Policy Configuration
The Asenion platform uses a hierarchical configuration system to define compliance policies, assessment controls, and scoring rules. This document covers the complete policy configuration schema.
Table of Contents
- Overview
- Top-Level Structure
- Organization
- Policies
- Control Bundles
- Controls
- Answer Types
- Answer Options
- Threshold-Based Scoring
- Card Types
- Control Visibility
- Policy Level
- Complete Configuration Example
- Enum Reference
- Risk & Alignment Scoring
- Versioning
- Best Practices
Overview
A Policy in Asenion represents a regulatory framework or compliance standard (e.g., EU AI Act, NIST AI RMF, internal governance policies). Policies are composed of Control Bundles (logical groups of requirements) which in turn contain Controls (individual questions or requirements that users must answer).
Policy
├── Control Bundle 1
│ ├── Control A
│ ├── Control B
│ └── Control C
├── Control Bundle 2
│ ├── Control D
│ └── Control E
└── ...
The full policy configuration is a JSON document containing three top-level sections:
| Section | Description |
|---|---|
organization | The organization publishing the policy |
policies | One or more policy definitions with control bundle references and compliance levels |
controlBundles | The actual control bundle definitions including controls, answer options, and scoring |
Top-Level Structure
{
"organization": { ... },
"policies": [ ... ],
"controlBundles": [ ... ]
}
Organization
Defines the organization that owns or publishes the policy.
| Property | Type | Required | Description |
|---|---|---|---|
name | string | Yes | Display name of the organization |
identifier | string | Yes | Unique reverse-domain identifier (e.g., com.fairly.ai) |
pgId | string | No | External identifier used to link the organization to an external system or database |
Example:
{
"organization": {
"name": "ABC",
"identifier": "com.fairly.ai",
"pgId": "2"
}
}
Policies
An array of policy definitions. Each policy groups multiple control bundles under a single compliance framework.
Policy Object
| Property | Type | Required | Description |
|---|---|---|---|
name | string | Yes | Display name of the policy |
identifier | string | Yes | Unique reverse-domain identifier for the policy (e.g., com.fairly.ai.euaiact) |
version | string | Yes | Semantic version string (e.g., "1.0", "2.1") |
description | string | No | Human-readable description of the policy |
link | string | No | URL to the original regulation or standard |
labels | string[] | No | Tags for categorization, filtering, and report behavior (see Policy Labels) |
controlBundles | ControlBundleRef[] | Yes | References to control bundles included in this policy |
compliance | Compliance[] | No | Compliance level definitions with score ranges |
dimensions | PolicyDimension[] | No | Risk/scoring dimensions for multi-dimensional analysis |
assessMethod | string | No | Assessment methodology identifier |
applicableProjectTypes | ProjectType[] | Yes | Which project types this policy can be applied to |
reportTemplateId | string | No | ID of the report template to use when generating reports |
isSystemLevel | boolean | No | If true, the policy is system-wide and available to all organizations. Default: false |
orgId | string | No | Organization ID. Required for organization-level policies; omit for system-level policies |
policyLevel | PolicyLevel | Read-only | Returned by the API as SYSTEM or ORGANIZATION. Derived automatically from isSystemLevel; do not set in config |
Control Bundle Reference
Each entry in the controlBundles array is a lightweight reference (not the full bundle definition). The actual bundle is defined in the top-level controlBundles array.
| Property | Type | Required | Description |
|---|---|---|---|
identifier | string | Yes | Identifier of the control bundle to include |
version | string | Yes | Version of the control bundle to include |
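Because each entry is only an identifier/version pair, a configuration loader typically verifies that every reference resolves to a bundle defined in the top-level controlBundles array. The sketch below illustrates such a check; validate_bundle_refs is an illustrative name, not a platform API.

```python
def validate_bundle_refs(config):
    """Return the (identifier, version) refs that do not resolve to a defined bundle."""
    defined = {(b["identifier"], b["version"]) for b in config.get("controlBundles", [])}
    missing = []
    for policy in config.get("policies", []):
        for ref in policy.get("controlBundles", []):
            key = (ref["identifier"], ref["version"])
            if key not in defined:
                missing.append(key)
    return missing

# Minimal example config: one defined bundle, one dangling reference.
example = {
    "controlBundles": [{"identifier": "com.abc.general", "version": "1.0.1"}],
    "policies": [{
        "controlBundles": [
            {"identifier": "com.abc.general", "version": "1.0.1"},
            {"identifier": "com.abc.missing", "version": "1.0.0"}
        ]
    }]
}
```

Note that both identifier and version must match; referencing version "1.0.0" of a bundle defined only at "1.0.1" is a dangling reference.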
Compliance Level
Compliance levels define named tiers based on aggregate assessment scores.
| Property | Type | Required | Description |
|---|---|---|---|
level | string | Yes | Display name of the compliance level (e.g., "Gold", "Certified") |
compliant | boolean | Yes | Whether this level qualifies as compliant |
score_min | number | No | Minimum score (inclusive) to achieve this level |
score_max | number | No | Maximum score (exclusive) for this level. Omit for the highest tier |
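Resolution of an aggregate score to a compliance level can be sketched as follows. This is a minimal illustration, not the platform's implementation; it assumes score_min is inclusive and score_max exclusive, per the table above, with the highest tier omitting score_max.

```python
def resolve_compliance_level(score, levels):
    """Return the name of the first level whose [score_min, score_max) range contains score.

    A missing score_min is treated as no lower bound; a missing score_max
    (the highest tier) as no upper bound.
    """
    for level in levels:
        lo = level.get("score_min", float("-inf"))
        hi = level.get("score_max", float("inf"))
        if lo <= score < hi:
            return level["level"]
    return None

levels = [
    {"level": "Non-Compliant", "compliant": False, "score_min": 0, "score_max": 49.9},
    {"level": "Compliant", "compliant": True, "score_min": 50, "score_max": 79.9},
    {"level": "Fully Compliant", "compliant": True, "score_min": 80},
]
```

Under these assumptions, a score such as 49.95 would fall between the first two tiers and resolve to no level; defining each tier's score_min equal to the previous tier's score_max avoids such gaps.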
Policy Dimension
Dimensions allow multi-dimensional scoring beyond a single aggregate score.
| Property | Type | Required | Description |
|---|---|---|---|
identifier | string | Yes | Unique identifier for the dimension |
name | string | Yes | Display name |
description | string | No | Description of what the dimension measures |
thresholds | number[] | No | Threshold values for bucketing scores |
ds | PolicyDimensionScore[] | No | Dimension score definitions |
labels | string[] | No | Tags for the dimension |
aggregationType | DimensionAggregationType | Yes | How scores are aggregated: SUM, AVG, MAX, MIN, or PERCENT |
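The aggregation types above can be sketched as follows. This is an illustrative interpretation, not the platform's code; in particular, PERCENT is assumed here to mean the summed score as a percentage of a supplied maximum (max_total), which is not specified by the schema.

```python
def aggregate(scores, aggregation_type, max_total=None):
    """Combine a dimension's scores according to its aggregationType."""
    if not scores:
        return 0.0
    if aggregation_type == "SUM":
        return sum(scores)
    if aggregation_type == "AVG":
        return sum(scores) / len(scores)
    if aggregation_type == "MAX":
        return max(scores)
    if aggregation_type == "MIN":
        return min(scores)
    if aggregation_type == "PERCENT":
        # Assumed semantics: sum expressed as a percentage of max_total.
        return 100.0 * sum(scores) / max_total
    raise ValueError(f"unknown aggregationType: {aggregation_type}")
```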
Policy Dimension Score
| Property | Type | Required | Description |
|---|---|---|---|
name | string | No | Display name for the score level |
score | number | Yes | The score value |
description | string | No | Description of this score level |
type | DimensionScoreType | Yes | Severity: NA, INFO, WARNING, or CRITICAL |
Applicable Project Types
A policy must declare which project types it applies to. Available values:
| Value | Description |
|---|---|
AI_SYSTEM | Top-level AI system |
FUNCTIONAL_MODEL | Functional component of an AI system |
MODEL_CANDIDATE | ML model being evaluated |
MODEL_CHAMPION | Champion ML model (selected winner) |
AGENT_CANDIDATE | AI agent being evaluated |
VENDOR_AGENT | Third-party vendor AI agent |
VENDOR_MODEL | Third-party vendor model |
DATASET | Dataset used by an AI system |
ORGANIZATION | Organization-level assessment |
Policy Labels
Labels on policies serve multiple purposes — categorization, filtering, and controlling report/UI behavior. You can use any custom string, but the following patterns have special meaning:
| Label Pattern | Description |
|---|---|
Inherent Risks | Indicates this policy covers inherent risk assessment |
Not Started | Status label for filtering policies by assessment progress |
Started | Status label for filtering policies by assessment progress |
report.project.assessment_details.hide | Hides the assessment details section in generated reports |
Custom labels (e.g., "raii", "eu-ai-act", "internal") can be used freely for filtering and organization in the UI.
Policy Example
{
"name": "ABC Responsible AI Assessment Checklist - Pre-Screening",
"identifier": "com.fairly.ai.projectinfo",
"reportTemplateId": "com.abc.rai.checklist",
"version": "1.0.1",
"description": "The purpose of this checklist is to assess projects involving AI-based solutions, ensuring alignment with ethical and responsible AI principles throughout their lifecycle.",
"link": "https://www.abc.example/",
"controlBundles": [
{ "identifier": "com.abc.org", "version": "1.0.1" },
{ "identifier": "com.abc.general", "version": "1.0.1" },
{ "identifier": "com.abc.prescreen", "version": "1.0.1" }
],
"labels": [
"Inherent Risks",
"Not Started",
"Started",
"report.project.assessment_details.hide"
],
"applicableProjectTypes": ["AI_SYSTEM", "FUNCTIONAL_MODEL", "MODEL_CANDIDATE", "AGENT_CANDIDATE"],
"compliance": [
{ "level": "Non-Compliant", "compliant": false, "score_min": 0, "score_max": 49.9 },
{ "level": "Compliant", "compliant": true, "score_min": 50, "score_max": 79.9 },
{ "level": "Fully Compliant", "compliant": true, "score_min": 80 }
]
}
Control Bundles
Control bundles are the building blocks of a policy. Each bundle groups related controls under a common theme (e.g., “Fairness”, “Transparency”, “Accountability”).
Control Bundle Object
| Property | Type | Required | Description |
|---|---|---|---|
name | string | Yes | Display name of the control bundle |
identifier | string | Yes | Unique reverse-domain identifier (e.g., com.fairly.ai.raii.dimension1) |
version | string | Yes | Version string |
description | string | No | Description of the bundle’s purpose |
weight | number | No | Relative weight of this bundle in aggregate scoring (default: 1.0) |
labels | string[] | No | Tags for categorization. Special labels include operational_risk, ai_tests |
compliance_min_percent | number | No | Minimum completion percentage for compliance |
compliance_max_percent | number | No | Maximum completion percentage threshold |
score_max | number | No | Maximum alignment score possible for this bundle |
risk_max | number | No | Maximum risk score possible for this bundle |
controls | Control[] | Yes | Array of control definitions |
dimensions | ControlBundleDimension[] | No | Dimensions specific to this bundle (see below) |
applicableProjectTypes | ProjectType[] | No | Override policy-level project type restrictions |
Control Bundle Dimension
Dimensions on a control bundle allow per-bundle multi-dimensional scoring. Note that the ds field here is an array of raw Float values (not objects), unlike PolicyDimension.ds which contains structured score objects.
| Property | Type | Required | Description |
|---|---|---|---|
identifier | string | Yes | Unique identifier for the dimension |
name | string | Yes | Display name |
description | string | No | Description of what the dimension measures |
thresholds | number[] | No | Threshold values for bucketing scores |
ds | number[] | No | Raw score values for the dimension |
labels | string[] | No | Tags for the dimension |
aggregationType | DimensionAggregationType | Yes | How scores are aggregated: SUM, AVG, MAX, MIN, or PERCENT |
Special Labels
Labels on control bundles influence how the platform computes risk and categorizes the bundle:
| Label | Effect |
|---|---|
operational_risk | Controls in this bundle contribute to the Operational Risk score |
ai_tests | Controls in this bundle contribute to the Model Risk score |
project_info | Informational only; typically not scored. Used for project metadata collection |
use_case_info | Informational only; used for AI use case metadata collection |
Note: Only the exact label operational_risk (singular) is recognized by the risk computation engine. If you use a variant like operational_risks (plural), it will not automatically contribute to risk scoring; it will be treated as a custom label.
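Because unrecognized labels are silently treated as custom, a lint step that flags likely misspellings of the special labels can catch this early. The near-miss heuristic below (a trailing "s") is an illustrative sketch, not platform behavior.

```python
# The special labels from the table above.
RECOGNIZED_LABELS = {"operational_risk", "ai_tests", "project_info", "use_case_info"}

def check_bundle_labels(labels):
    """Return warnings for labels that look like pluralized special labels."""
    warnings = []
    for label in labels:
        if label in RECOGNIZED_LABELS:
            continue
        # Heuristic: stripping trailing 's' recovers a recognized label.
        if label.rstrip("s") in RECOGNIZED_LABELS:
            warnings.append(
                f"'{label}' is not recognized; did you mean '{label.rstrip('s')}'?"
            )
    return warnings
```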
Control Bundle Example
{
"name": "General Information",
"identifier": "com.abc.general",
"version": "1.0.1",
"description": "General information for this AI use case.",
"weight": 1.0,
"labels": ["use_case_info"],
"controls": [ ... ]
}
Controls
A control is an individual question or requirement within a control bundle. Each control specifies its answer type, answer options, and scoring rules.
Control Object
| Property | Type | Required | Description |
|---|---|---|---|
identifier | string | Yes | Unique identifier within the bundle (e.g., dimension1.biasimpact.1) |
name | string | Yes | Display name / title |
description | string | No | Detailed description or guidance text |
question | string | No | The question prompt shown to users |
answerType | AnswerType | Yes | Type of answer input (see Answer Types) |
answerOptions | AnswerOption[] | No | Available answer choices (for scored/selection types) |
optional | boolean | No | If true, the control is not required for completion. Default: false |
formula | string | No | Formula for computed controls |
defaultValue | string | No | Pre-populated default value |
labels | string[] | No | Tags for categorization and filtering |
mainCategory | string | No | Primary categorization |
subCategories | string[] | No | Secondary categorizations |
cardTypes | CardType[] | No | Card templates (required when answerType is CARD) |
controlSource | ControlSource[] | No | References to the original regulatory source |
controlCitations | ControlCitation[] | No | Detailed citations to regulatory text |
visibilityType | VisibilityType | No | INDEPENDENT (always visible) or DEPENDENT (shown based on rules) |
framework | string[] | No | Framework identifiers this control maps to |
frameworkLevel | string[] | No | Level within the framework hierarchy |
technicalArea | string[] | No | Technical domain tags |
role | string[] | No | Roles responsible for this control |
usecase | string[] | No | Applicable use case tags |
prerequisiteState | string[] | No | States that must be met before this control becomes active |
controlConclusion | string[] | No | Predefined conclusion options |
industry | string[] | No | Industry-specific tags |
evidence | string[] | No | Evidence attachment labels or identifiers associated with the control |
Control Source
| Property | Type | Required | Description |
|---|---|---|---|
sourceReference | string | No | Reference ID from the regulation (e.g., article/section number) |
sourceText | string | No | Relevant excerpt from the source text |
Control Citation
| Property | Type | Required | Description |
|---|---|---|---|
reference | string | No | Reference identifier |
text | string | No | Citation text |
citationSource | string | No | Source document name |
citationTitle | string | No | Title of the cited section |
citationText | string | No | Full text of the citation |
Answer Types
The answerType field determines what UI component is rendered and how the answer is stored and scored.
| Answer Type | Description | Scored | Answer Options Required |
|---|---|---|---|
SCORE_CHECKBOX | Multiple-selection checkboxes where each option has a score value | Yes | Yes |
SCORE_MULTIPLE_CHOICE | Single-selection radio buttons where each option has a score value | Yes | Yes |
TEXT_TEXT | Free-text single-line input | No | No |
TEXT_TEXT_MULTI | Free-text multi-line input (textarea) | No | No |
TEXT_TEXT_CHECKLIST | Checklist of text items | No | No |
TEST_SCORE | Numeric test result input with threshold-based scoring | Yes | Yes (with thresholds) |
CARD | Structured data cards for repeatable entries | Varies | No (uses cardTypes) |
DOC_UPLOAD | File/document upload | No | No |
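The "Answer Options Required" column above implies a structural constraint that a config validator can enforce. The following sketch encodes that table; it is illustrative, and validate_control_options is not a platform API.

```python
# Answer types whose controls must define answerOptions, per the table above.
OPTIONS_REQUIRED = {"SCORE_CHECKBOX", "SCORE_MULTIPLE_CHOICE", "TEST_SCORE"}

def validate_control_options(control):
    """Return an error string if the control is missing required structures, else None."""
    answer_type = control["answerType"]
    if answer_type in OPTIONS_REQUIRED and not control.get("answerOptions"):
        return f"{control['identifier']}: answerType {answer_type} requires answerOptions"
    # CARD controls use cardTypes instead of answerOptions.
    if answer_type == "CARD" and not control.get("cardTypes"):
        return f"{control['identifier']}: answerType CARD requires cardTypes"
    return None
```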
Answer Options
Answer options define the possible answers for scored controls and how each answer maps to alignment, risk, and dimension scores.
Answer Option Object
| Property | Type | Required | Description |
|---|---|---|---|
identifier | string | No | Unique identifier for the option |
type | AnswerOptionType | Yes | Type of option (see below) |
value | number | No | Numeric score value for this option |
answer | string | No | Display text for the answer choice |
defaultValue | number | No | Default value (for slider/threshold types) |
thresholds | number[] | No | Bucket boundaries for threshold-based scoring. Used on both SCORE_THRESHOLD and SCORE option types (see How value and thresholds Work Together) |
alignmentScore | number[] | No | Alignment score for each threshold bucket (one value per bucket, i.e., thresholds.length - 1 values) |
riskScore | number[] | No | Risk score for each threshold bucket (one value per bucket, i.e., thresholds.length - 1 values) |
ds | DimensionScore[] | No | Per-dimension score contributions |
nextControl | string | No | Identifier of the next control to show (for conditional flows) |
risk | AnswerOptionRisk[] | No | Risk metadata associated with this option |
scoreMin | number | No | (Deprecated) Minimum score for slider |
scoreMax | number | No | (Deprecated) Maximum score for slider |
thresholdLow | number | No | (Deprecated) Low threshold |
thresholdHigh | number | No | (Deprecated) High threshold |
Answer Option Types
| Type | Description |
|---|---|
SCORE | Fixed score value. Selecting this option contributes value to the total score. |
SCORE_THRESHOLD | Threshold-based scoring. Uses thresholds, alignmentScore, and riskScore arrays to compute scores based on the entered numeric value. |
TEXT_TEXT | Text-only option with no scoring impact. |
Dimension Score
Maps an answer option to a specific scoring dimension.
| Property | Type | Required | Description |
|---|---|---|---|
dimension | string | Yes | Identifier of the dimension |
score | number | Yes | Score contribution to that dimension |
Answer Option Risk
Metadata describing the risk implication of an answer.
| Property | Type | Required | Description |
|---|---|---|---|
riskType | string[] | No | Categories of risk (e.g., ["bias", "fairness"]) |
riskExplanation | string | No | Human-readable explanation of the risk |
How value and thresholds Work Together
Answer options can carry two independent scoring mechanisms that serve different purposes:
| Field | Purpose | Used For |
|---|---|---|
value | The compliance/alignment score contributed when this option is selected | Completion percentage, aggregate compliance scoring |
thresholds + riskScore + alignmentScore | Risk and alignment classification based on threshold buckets | Risk status computation (HIGH / MEDIUM / LOW), alignment status |
These are not mutually exclusive. A single SCORE option commonly has both:
{
"type": "SCORE",
"value": 1,
"answer": "Yes",
"thresholds": [0, 0.5, 1],
"riskScore": [0, 1],
"alignmentScore": [1, 1]
}
In this example:
- value: 1 means selecting "Yes" adds 1 point to the compliance score.
- thresholds: [0, 0.5, 1] with riskScore: [0, 1] defines how the option's value maps to risk: the value of 1 falls in the second bucket [0.5, 1], yielding a risk score of 1.
When are thresholds used on SCORE options?
For SCORE_MULTIPLE_CHOICE and SCORE_CHECKBOX controls, the platform uses the value field for compliance scoring, but also uses thresholds/riskScore/alignmentScore to compute risk and alignment statuses. This is why the ABC example includes threshold arrays on every scored answer option.
When is only thresholds used (no value)?
For TEST_SCORE controls with SCORE_THRESHOLD options, only the threshold arrays matter. The user enters a raw numeric value, and the platform looks up which bucket it falls into to determine the risk and alignment scores. See Threshold-Based Scoring.
Threshold-Based Scoring
The thresholds, alignmentScore, and riskScore arrays work together to create a bucket-based scoring system. This mechanism is used in two contexts:
- TEST_SCORE controls: the user enters a raw number, which is placed into a threshold bucket.
- SCORE_MULTIPLE_CHOICE / SCORE_CHECKBOX controls: each answer option's value is placed into a threshold bucket to derive risk and alignment scores.
How It Works
The thresholds array defines bucket boundaries. For n thresholds, there are n - 1 buckets. The alignmentScore and riskScore arrays each have n - 1 values, one per bucket.
thresholds: [0, 0.5, 0.8, 1.0]
|------|------|------|
bucket0 bucket1 bucket2
alignmentScore: [0, 0.5, 1.0]
riskScore: [1.0, 0.5, 0]
Scoring example:
| Input Value | Bucket | Alignment Score | Risk Score |
|---|---|---|---|
0.3 | [0, 0.5) — bucket 0 | 0 | 1.0 |
0.6 | [0.5, 0.8) — bucket 1 | 0.5 | 0.5 |
0.9 | [0.8, 1.0] — bucket 2 | 1.0 | 0 |
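The bucket lookup in the example above can be sketched as a small function. It assumes half-open buckets [low, high) with the final bucket closed at the top, matching the worked table; the actual engine's boundary handling may differ.

```python
def threshold_scores(value, thresholds, alignment_score, risk_score):
    """Place value into a threshold bucket and return (alignment, risk) for that bucket."""
    n_buckets = len(thresholds) - 1
    for i in range(n_buckets):
        low, high = thresholds[i], thresholds[i + 1]
        # Half-open [low, high), except the last bucket also includes its upper bound.
        if low <= value < high or (i == n_buckets - 1 and value == high):
            return alignment_score[i], risk_score[i]
    raise ValueError(f"value {value} outside threshold range {thresholds}")

thresholds = [0, 0.5, 0.8, 1.0]
alignment = [0, 0.5, 1.0]
risk = [1.0, 0.5, 0]
```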
Card Types
Card types define structured, repeatable data entry templates. They are used when a control has answerType: "CARD".
Card Type Object
| Property | Type | Required | Description |
|---|---|---|---|
cardTitle | string | Yes | Title of the card template |
cardDescription | string | No | Description of the card purpose |
cardFields | CardField[] | Yes | Array of field definitions within the card |
Card Field
| Property | Type | Required | Description |
|---|---|---|---|
type | AnswerType | Yes | Input type for this field (uses the same AnswerType enum) |
label | string | Yes | Display label for the field |
required | boolean | No | Whether the field is required |
defaultValue | string | No | Default value |
answerOptions | AnswerOption[] | No | Answer options (if the field is a selection type) |
Control Visibility
Controls can be conditionally shown or hidden based on policy rules.
| Visibility Type | Description |
|---|---|
INDEPENDENT | Always visible regardless of other answers |
DEPENDENT | Visibility depends on external conditions (e.g., screening results or policy rules) |
The nextControl field on answer options can also create sequential flows where selecting a specific answer reveals the next control.
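One plausible reading of nextControl-based flows is a walk over the bundle's controls in declared order, jumping when a selected answer carries a nextControl. The sketch below is a simplified illustration under that assumption (it does not guard against cycles) and is not the platform's traversal logic.

```python
def resolve_flow(controls, answers):
    """Walk controls in declared order, following nextControl jumps on selected answers.

    controls: list of control dicts in declared order.
    answers: {control_identifier: selected answer option dict}.
    Returns the control identifiers visited, in order.
    """
    order = [c["identifier"] for c in controls]
    visited, current = [], order[0]
    while current is not None:
        visited.append(current)
        option = answers.get(current)
        if option and option.get("nextControl"):
            current = option["nextControl"]
        else:
            # No jump: fall through to the next control in declared order.
            idx = order.index(current) + 1
            current = order[idx] if idx < len(order) else None
    return visited

controls = [{"identifier": i} for i in ("a", "b", "c")]
```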
Policy Level
| Level | Description |
|---|---|
SYSTEM | Available to all organizations on the platform. Managed by platform admins. |
ORGANIZATION | Scoped to a single organization. Managed by org admins. |
Complete Configuration Example
Below is a real-world example of a complete policy configuration — ABC’s Responsible AI Pre-Screening Checklist. It demonstrates three control bundles with different purposes: project information (text inputs), general AI use case details (mix of text and scored choices), and a pre-screening checklist (scored multiple choice questions).
{
"organization": {
"name": "ABC",
"identifier": "com.fairly.ai",
"pgId": "2"
},
"policies": [
{
"name": "ABC Responsible AI Assessment Checklist - Pre-Screening",
"identifier": "com.fairly.ai.projectinfo",
"reportTemplateId": "com.abc.rai.checklist",
"version": "1.0.1",
"description": "The purpose of this checklist is to assess projects involving AI-based solutions, ensuring alignment with ethical and responsible AI principles throughout their lifecycle.",
"link": "https://www.abc.example/",
"controlBundles": [
{ "identifier": "com.abc.org", "version": "1.0.1" },
{ "identifier": "com.abc.general", "version": "1.0.1" },
{ "identifier": "com.abc.prescreen", "version": "1.0.1" }
],
"labels": [
"Inherent Risks",
"Not Started",
"Started",
"report.project.assessment_details.hide"
],
"applicableProjectTypes": ["AI_SYSTEM", "FUNCTIONAL_MODEL", "MODEL_CANDIDATE", "AGENT_CANDIDATE"],
"compliance": [
{ "level": "Non-Compliant", "compliant": false, "score_min": 0, "score_max": 49.9 },
{ "level": "Compliant", "compliant": true, "score_min": 50, "score_max": 79.9 },
{ "level": "Fully Compliant", "compliant": true, "score_min": 80 }
]
}
],
"controlBundles": [
{
"name": "Project Information",
"identifier": "com.abc.org",
"version": "1.0.1",
"description": "Project information for this AI use case.",
"weight": 1.0,
"labels": ["project_info"],
"controls": [
{
"identifier": "org.name",
"name": "Organization Name",
"question": "Company or Organization Name",
"labels": ["org.name"],
"answerType": "TEXT_TEXT"
},
{
"identifier": "org.department",
"name": "Business Owner",
"question": "Business Owner (Division/Department/Unit)",
"labels": ["branch"],
"answerType": "TEXT_TEXT"
},
{
"identifier": "org.project",
"name": "Project Name",
"question": "Project Name",
"labels": ["project.name"],
"answerType": "TEXT_TEXT"
},
{
"identifier": "org.project.desc",
"name": "Project Description",
"question": "Project Description",
"labels": ["project.Description"],
"answerType": "TEXT_TEXT"
},
{
"identifier": "org.project.owner",
"name": "Project Owner",
"question": "Project Owner",
"labels": ["project.owner"],
"answerType": "TEXT_TEXT"
}
]
},
{
"name": "General Information",
"identifier": "com.abc.general",
"version": "1.0.1",
"description": "General information for this AI use case.",
"weight": 1.0,
"labels": ["use_case_info"],
"controls": [
{
"identifier": "info.usercase",
"name": "AI Use Case / Purpose",
"description": "Describe the specific function, process, or business objective that the AI solution is designed to support or enhance.",
"question": "AI Use Case / Purpose:",
"labels": ["info.usercase"],
"answerType": "TEXT_TEXT"
},
{
"identifier": "info.app",
"name": "AI Application / System Name:",
"description": "Provide the name of the AI application or system.",
"question": "AI Application / System Name:",
"labels": ["info.app"],
"answerType": "TEXT_TEXT"
},
{
"identifier": "info.deployenv",
"name": "System Deployment Environment",
"description": "Specify the type of environment used to deploy or host this AI system.",
"question": "System Deployment Environment",
"labels": ["info.deployenv"],
"answerType": "SCORE_MULTIPLE_CHOICE",
"answerOptions": [
{
"type": "SCORE",
"value": 1,
"answer": "On-cloud (AWS)",
"thresholds": [0, 0.5, 1],
"riskScore": [0, 1],
"alignmentScore": [1, 1]
},
{
"type": "SCORE",
"value": 1,
"answer": "On-cloud (Azure)",
"thresholds": [0, 0.5, 1],
"riskScore": [0, 1],
"alignmentScore": [1, 1]
},
{
"type": "SCORE",
"value": 1,
"answer": "On-cloud (GCP)",
"thresholds": [0, 0.5, 1],
"riskScore": [0, 1],
"alignmentScore": [1, 1]
},
{
"type": "SCORE",
"value": 1,
"answer": "On-cloud (Other)",
"thresholds": [0, 0.5, 1],
"riskScore": [0, 1],
"alignmentScore": [1, 1]
},
{
"type": "SCORE",
"value": 1,
"answer": "On-premises",
"thresholds": [0, 0.5, 1],
"riskScore": [0, 1],
"alignmentScore": [1, 1]
},
{
"type": "SCORE",
"value": 1,
"answer": "Hybrid (Cloud + on-premises)",
"thresholds": [0, 0.5, 1],
"riskScore": [0, 1],
"alignmentScore": [1, 1]
},
{
"type": "SCORE",
"value": 1,
"answer": "SaaS (Third-party Hosted)",
"thresholds": [0, 0.5, 1],
"riskScore": [0, 1],
"alignmentScore": [1, 1]
}
]
},
{
"identifier": "info.devapproach",
"name": "AI Development Approach",
"description": "Specify the type of AI model used in this system.",
"question": "AI Development Approach:",
"labels": ["info.devapproach"],
"answerType": "SCORE_MULTIPLE_CHOICE",
"answerOptions": [
{
"type": "SCORE",
"value": 1,
"answer": "External Prebuilt AI Service",
"thresholds": [0, 0.5, 1],
"riskScore": [0, 1],
"alignmentScore": [1, 1]
},
{
"type": "SCORE",
"value": 1,
"answer": "In-house Custom Model Development",
"thresholds": [0, 0.5, 1],
"riskScore": [0, 1],
"alignmentScore": [1, 1]
},
{
"type": "SCORE",
"value": 1,
"answer": "Fine-tuned Pretrained Model",
"thresholds": [0, 0.5, 1],
"riskScore": [0, 1],
"alignmentScore": [1, 1]
},
{
"type": "SCORE",
"value": 1,
"answer": "Open-Source Model Integration",
"thresholds": [0, 0.5, 1],
"riskScore": [0, 1],
"alignmentScore": [1, 1]
},
{
"type": "SCORE",
"value": 1,
"answer": "Hybrid (External + Internal)",
"thresholds": [0, 0.5, 1],
"riskScore": [0, 1],
"alignmentScore": [1, 1]
},
{
"type": "SCORE",
"value": 1,
"answer": "Third-party SaaS AI Solution",
"thresholds": [0, 0.5, 1],
"riskScore": [0, 1],
"alignmentScore": [1, 1]
},
{
"type": "TEXT_TEXT",
"value": 1,
"answer": "Other (please specify): ",
"thresholds": [0, 0.5, 1],
"riskScore": [0, 1],
"alignmentScore": [1, 1]
}
]
},
{
"identifier": "info.vendor",
"name": "AI Model / Vendor Used",
"description": "Specify the model name, version, and vendor (e.g., GPT-4 by OpenAI).",
"question": "AI Model / Vendor Used:",
"labels": ["info.vendor"],
"answerType": "TEXT_TEXT"
},
{
"identifier": "info.datasources",
"name": "Primary Data Sources",
"description": "Identify the key data types used in this AI use case (e.g., financial transactions, customer profiles, audio data).",
"question": "Primary Data Sources:",
"labels": ["info.datasources"],
"answerType": "TEXT_TEXT"
}
]
},
{
"name": "Pre-RAI Screening Checklist",
"identifier": "com.abc.prescreen",
"version": "1.0.1",
"description": "The purpose of this pre-screen checklist is to identify applicable checklist items.",
"weight": 1.0,
"labels": ["project_info", "operational_risks"],
"compliance_min_percent": 0,
"score_max": 0,
"controls": [
{
"identifier": "project.sysinterface",
"name": "User Interface with AI",
"description": "End-users interact directly with the AI (e.g., type, speak, or upload data).",
"question": "Does the system have an interface that allows end-users to input data or interact with the AI directly?",
"labels": ["project.ai.sysinterface"],
"answerType": "SCORE_MULTIPLE_CHOICE",
"answerOptions": [
{
"type": "SCORE",
"value": 1,
"answer": "Yes: The system has an interface that allows users to input data or interact directly with the AI.",
"thresholds": [0, 0.5, 1],
"riskScore": [0, 1],
"alignmentScore": [1, 1]
},
{
"type": "SCORE",
"value": 0,
"answer": "No: The system does not have an interface that allows users to input data or interact directly with the AI. It functions only as a backend engine for internal organizational use (e.g., data analytics, decision support).",
"thresholds": [0, 0.5, 1],
"riskScore": [0, 1],
"alignmentScore": [1, 1]
}
]
},
{
"identifier": "project.sysfreeforminputs",
"name": "Free-Form Input or Prompt",
"description": "AI accepts open-ended inputs (not just fixed choices), like voice or typed queries.",
"question": "Does the system accept free-form inputs from end-users or backend sources?",
"labels": ["project.ai.sysinterface"],
"answerType": "SCORE_MULTIPLE_CHOICE",
"answerOptions": [
{
"type": "SCORE",
"value": 1,
"answer": "Yes: The system supports free-form inputs (e.g., text prompts, voice commands, or images). These inputs are open-ended and unstructured, allowing flexible interaction.",
"thresholds": [0, 0.5, 1],
"riskScore": [0, 1],
"alignmentScore": [1, 1]
},
{
"type": "SCORE",
"value": 0,
"answer": "No: The system does not accept or interpret free-form inputs for generating content or making decisions. It only processes structured or predefined inputs.",
"thresholds": [0, 0.5, 1],
"riskScore": [0, 1],
"alignmentScore": [1, 1]
}
]
},
{
"identifier": "project.systrainedmodel",
"name": "Output from Trained Model",
"description": "AI generates results based on prior training, not just user-uploaded content.",
"question": "Does the AI system use a trained model that interprets user prompts to guide its responses or generate outputs?",
"labels": ["project.ai.systrainedmodel"],
"answerType": "SCORE_MULTIPLE_CHOICE",
"answerOptions": [
{
"type": "SCORE",
"value": 1,
"answer": "Yes: The system uses a trained AI model that interprets user prompts to guide its responses.",
"thresholds": [0, 0.5, 1],
"riskScore": [0, 1],
"alignmentScore": [1, 1]
},
{
"type": "SCORE",
"value": 0,
"answer": "No: The system does not generate responses based on a trained model's generalization. It only processes or summarizes the specific content provided by the user.",
"thresholds": [0, 0.5, 1],
"riskScore": [0, 1],
"alignmentScore": [1, 1]
}
]
},
{
"identifier": "project.sysmodelresponse",
"name": "Model Access or Adjustability",
"description": "The model can be accessed or adjusted by your team (or vendor upon request).",
"question": "Can you influence or customize how the AI model responds, either by modifying the model itself or by shaping its behavior through prompt design, system instructions, or configuration options?",
"labels": ["project.ai.sysmodelresponse"],
"answerType": "SCORE_MULTIPLE_CHOICE",
"answerOptions": [
{
"type": "SCORE",
"value": 1,
"answer": "Yes: You can influence the AI's output through direct fine-tuning or retraining of the model, or by shaping its behavior using prompt design, system-level instructions, or configuration settings.",
"thresholds": [0, 0.5, 1],
"riskScore": [0, 1],
"alignmentScore": [1, 1]
},
{
"type": "SCORE",
"value": 0,
"answer": "No: You cannot influence how the AI responds. The system behaves in a fixed manner and does not support model fine-tuning, prompt engineering, or configuration changes.",
"thresholds": [0, 0.5, 1],
"riskScore": [0, 1],
"alignmentScore": [1, 1]
}
]
},
{
"identifier": "project.sysdata",
"name": "Use of Personal or Sensitive Data",
"description": "The system uses or produces output based on personal or sensitive data.",
"question": "Does the AI system collect, process, store, or generate outputs based on personal or sensitive data?",
"labels": ["project.ai.sysmodelresponse"],
"answerType": "SCORE_MULTIPLE_CHOICE",
"answerOptions": [
{
"type": "SCORE",
"value": 1,
"answer": "Yes: The system handles any form of personal or sensitive data, whether directly or indirectly.",
"thresholds": [0, 0.5, 1],
"riskScore": [0, 1],
"alignmentScore": [1, 1]
},
{
"type": "SCORE",
"value": 0,
"answer": "No: The system does not handle any personal or sensitive data in any form.",
"thresholds": [0, 0.5, 1],
"riskScore": [0, 1],
"alignmentScore": [1, 1]
}
]
}
]
}
]
}
Key Patterns in This Example
This configuration demonstrates several important patterns:
| Pattern | Where | Description |
|---|---|---|
| Text-only bundle | com.abc.org | Uses only TEXT_TEXT controls for metadata collection. Labeled project_info so it is not scored. |
| Mixed bundle | com.abc.general | Combines TEXT_TEXT and SCORE_MULTIPLE_CHOICE controls. Uses use_case_info label. |
| Screening bundle | com.abc.prescreen | All SCORE_MULTIPLE_CHOICE with Yes/No pattern. Sets score_max: 0 and compliance_min_percent: 0. |
| TEXT_TEXT option type | info.devapproach | Last answer option uses "type": "TEXT_TEXT" to allow free-text “Other” input alongside scored options. |
| Threshold scoring on SCORE options | All scored controls | Even SCORE type options include thresholds, riskScore, and alignmentScore for risk computation. |
| Report label | Policy labels | report.project.assessment_details.hide suppresses assessment detail in the generated report. |
| Compliance tiers | Policy | Three compliance levels with non-overlapping score ranges; highest tier omits score_max. |
| Project type scoping | Policy | applicableProjectTypes restricts which project types can use this policy. |
Enum Reference
AnswerType
| Value | Description |
|---|---|
SCORE_CHECKBOX | Multi-select with scoring |
SCORE_MULTIPLE_CHOICE | Single-select with scoring |
TEXT_TEXT | Single-line text input |
TEXT_TEXT_MULTI | Multi-line text input |
TEXT_TEXT_CHECKLIST | Checklist text input |
TEST_SCORE | Numeric input with threshold scoring |
CARD | Structured card data entry |
DOC_UPLOAD | File upload |
AnswerOptionType
| Value | Description |
|---|---|
SCORE | Fixed numeric score |
SCORE_THRESHOLD | Threshold-based scoring with buckets |
TEXT_TEXT | Text-only (no scoring) |
DimensionAggregationType
| Value | Description |
|---|---|
SUM | Sum all scores in the dimension |
AVG | Average all scores |
MAX | Take the maximum score |
MIN | Take the minimum score |
PERCENT | Calculate as a percentage |
DimensionScoreType
| Value | Description |
|---|---|
NA | Not applicable |
INFO | Informational severity |
WARNING | Warning severity |
CRITICAL | Critical severity |
PolicyLevel
| Value | Description |
|---|---|
SYSTEM | System-wide, available to all orgs |
ORGANIZATION | Scoped to a single organization |
VisibilityType
| Value | Description |
|---|---|
INDEPENDENT | Always visible |
DEPENDENT | Conditionally visible |
ProjectType
| Value | Description |
|---|---|
AI_SYSTEM | Top-level AI system |
FUNCTIONAL_MODEL | Functional component of an AI system |
MODEL_CANDIDATE | ML model under evaluation |
MODEL_CHAMPION | Selected champion model |
AGENT_CANDIDATE | AI agent under evaluation |
VENDOR_AGENT | Third-party vendor agent |
VENDOR_MODEL | Third-party vendor model |
DATASET | Dataset |
ORGANIZATION | Organization-level |
Risk & Alignment Scoring
How Scores Are Computed
Scoring flows from answers up through assessments to the project level:
1. Answer Level
For SCORE type options, the value field contributes to the compliance score. If the option also has thresholds, riskScore, and alignmentScore arrays, the value is placed into a threshold bucket to determine risk and alignment scores. For SCORE_THRESHOLD options (used with TEST_SCORE controls), only the threshold arrays are used — the user’s raw numeric input is bucketed directly.
2. Assessment Level
All answer scores within an assessment are aggregated:
- Risk Score: Sum of answer risk scores / maximum possible risk score. Compared against configurable thresholds to produce a risk status of HIGH, MEDIUM, or LOW.
- Alignment Score: Sum of answer alignment scores / maximum possible alignment score.
3. Project Level
Assessment-level scores are combined using weighted averages across all assessments in the project.
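The answer- and assessment-level steps above can be sketched as follows. The bucketing convention (a value in [thresholds[i], thresholds[i+1]) maps to the i-th score, with the top edge inclusive) and the function names are assumptions for illustration, not the platform's actual implementation:

```python
import bisect

def bucket_score(value, thresholds, scores):
    """Place a numeric value into a threshold bucket (assumed convention).

    thresholds must have exactly one more element than scores; a value in
    [thresholds[i], thresholds[i+1]) maps to scores[i], top edge inclusive.
    """
    assert len(thresholds) == len(scores) + 1
    if value >= thresholds[-1]:
        return scores[-1]
    i = bisect.bisect_right(thresholds, value) - 1
    return scores[max(i, 0)]

# Answer level: a SCORE option with value 1, thresholds [0, 0.5, 1]
risk = bucket_score(1, [0, 0.5, 1], [0, 1])        # -> 1 (top bucket)
alignment = bucket_score(1, [0, 0.5, 1], [1, 1])   # -> 1

# Assessment level: sum of answer risk scores / maximum possible risk score
answers = [(1, 1), (0, 1), (1, 1)]  # (risk score, max possible risk) per answer
assessment_risk = sum(r for r, _ in answers) / sum(m for _, m in answers)  # -> 2/3
```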
Configurable Thresholds
These platform-level settings control how numeric scores translate to risk statuses:
| Setting | Default | Description |
|---|---|---|
PLATFORM_UNACCEPTABLE_RISK_PERCENT | 75 | Score percent above which risk is HIGH |
PLATFORM_RISK_APPETITE_PERCENT | 25 | Score percent at or below which risk is LOW |
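Under these defaults, a risk score percentage maps to a status roughly as below. This is a sketch derived from the table wording ("above" for HIGH, "at or below" for LOW); exact boundary handling in the platform is assumed:

```python
def risk_status(score_percent,
                unacceptable_risk_percent=75,  # PLATFORM_UNACCEPTABLE_RISK_PERCENT
                risk_appetite_percent=25):     # PLATFORM_RISK_APPETITE_PERCENT
    """Translate a risk score percentage into a HIGH / MEDIUM / LOW status."""
    if score_percent > unacceptable_risk_percent:
        return "HIGH"
    if score_percent <= risk_appetite_percent:
        return "LOW"
    return "MEDIUM"

print(risk_status(80))  # HIGH
print(risk_status(25))  # LOW
print(risk_status(50))  # MEDIUM
```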
Versioning
Both policies and control bundles are versioned independently. This allows:
- Policy version bumps when the compliance structure changes (new bundles added, compliance levels adjusted).
- Control bundle version bumps when controls are added, removed, or modified.
The combination of identifier + version must be unique across the platform (within the same policy level).
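A configuration linter could enforce this uniqueness rule as follows; the validate_unique_versions helper is hypothetical, not a platform API, and the policyLevel default is assumed:

```python
def validate_unique_versions(items):
    """Fail if two items share an identifier + version within the same policy level."""
    seen = set()
    for item in items:
        key = (item.get("policyLevel", "SYSTEM"), item["identifier"], item["version"])
        if key in seen:
            raise ValueError(f"duplicate identifier:version pair: {key}")
        seen.add(key)

# New versions of the same identifier are allowed; exact duplicates are not.
validate_unique_versions([
    {"identifier": "com.abc.general", "version": 1},
    {"identifier": "com.abc.general", "version": 2},
])
```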
Best Practices
- Use reverse-domain identifiers: Follow the pattern com.yourorg.policyname.bundlename for globally unique identifiers.
- Version carefully: Bump the version when making changes. Existing assessments reference a specific identifier:version pair.
- Set compliance levels: Define clear compliance tiers with non-overlapping score ranges.
- Use meaningful labels: Labels drive risk computation categories (operational_risk, ai_tests) and enable filtering in the UI.
- Mix answer types: Combine scored controls (SCORE_CHECKBOX, SCORE_MULTIPLE_CHOICE) with qualitative controls (TEXT_TEXT, DOC_UPLOAD) for comprehensive assessments.
- Define thresholds consistently: Always ensure the thresholds array has one more element than the alignmentScore and riskScore arrays. Include thresholds on SCORE options too if you need risk/alignment computation.
- Keep controls atomic: Each control should ask one clear question. Use control bundles to group related questions.
- Use operational_risk (singular): Only the exact label operational_risk is recognized by the risk engine. Variants like operational_risks will not trigger risk computation.
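The thresholds-consistency rule above can be checked mechanically. This sketch walks the answer options of a control using the field names from the examples in this document (check_threshold_arrays itself is a hypothetical helper):

```python
def check_threshold_arrays(control):
    """Verify thresholds has exactly one more element than riskScore/alignmentScore."""
    problems = []
    for opt in control.get("answerOptions", []):
        thresholds = opt.get("thresholds")
        if thresholds is None:
            continue  # text-only options carry no scoring arrays
        for field in ("riskScore", "alignmentScore"):
            scores = opt.get(field, [])
            if len(thresholds) != len(scores) + 1:
                problems.append(f"{control['identifier']}: {field} length mismatch")
    return problems

control = {
    "identifier": "project.sysdata",
    "answerOptions": [
        {"type": "SCORE", "value": 1,
         "thresholds": [0, 0.5, 1], "riskScore": [0, 1], "alignmentScore": [1, 1]},
    ],
}
assert check_threshold_arrays(control) == []
```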