Hong Kong Monetary Authority (HKMA) GenAI Consumer Protection Principles (2024)
The Hong Kong Monetary Authority (HKMA) GenAI Consumer Protection Principles (2024) provide updated guidance for authorized institutions on the use of generative artificial intelligence (GenAI) in customer-facing applications. Issued via circular on 19 August 2024, these principles emphasize ethical and responsible deployment of GenAI technologies while prioritizing consumer protection, transparency, and trust.
This policy builds upon the HKMA's 2019 Big Data Analytics and Artificial Intelligence (BDAI) consumer protection principles and reflects evolving technological capabilities and emerging risks posed by GenAI in banking and financial services.
What are the GenAI Consumer Protection Principles?
The 2024 principles are tailored to address the specific characteristics and risks of GenAI systems, especially in high-stakes, customer-facing contexts. The core expectations include:
Governance and Accountability
Institutions must establish robust oversight frameworks for GenAI systems. Senior management should be accountable for ensuring that GenAI applications comply with ethical standards and regulatory requirements.
Transparency and Explainability
Customers must be clearly informed when they are interacting with GenAI systems. Institutions must provide clear disclosures about GenAI usage and ensure that decision-making processes are understandable to users.
Accuracy and Reliability
GenAI systems must be tested for accuracy, reliability, and appropriateness before deployment. Institutions should implement controls to monitor and validate content generated by AI to prevent hallucinations or misinformation.
Consumer Consent and Awareness
Where customer data is used for GenAI applications, institutions must obtain clear, informed consent. Users should understand how their data may be processed, transformed, or used to train GenAI models.
Data Privacy and Security
Institutions must adhere to privacy regulations and implement strong safeguards to protect sensitive customer information from unauthorized access or misuse, especially when such data is used to train or inform GenAI outputs.
Bias and Fairness
GenAI systems should be assessed and managed for risks of bias or discriminatory outcomes. Institutions must demonstrate proactive steps to identify and mitigate fairness issues in training data and model behavior.
Human Oversight and Escalation
Human intervention should be available at all critical decision points. Customers must have access to human support, particularly when seeking resolution or clarification for GenAI-generated content or decisions.
Monitoring and Incident Handling
Institutions should implement continuous monitoring of GenAI usage and establish protocols for addressing unintended consequences, performance degradation, or consumer complaints.
These principles apply throughout the GenAI system lifecycle and are especially relevant to virtual assistants, chatbots, generative content engines, and automated communications in financial services.
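To make the operational expectations above more concrete, the following Python sketch shows one way an institution *might* wire disclosure, output validation, human escalation, and incident logging into a customer-facing GenAI flow. This is a hypothetical illustration only: the class, field, and method names (`GenAIGuardrail`, `review_queue`, `handle`, and the confidence threshold) are assumptions for the example, not terminology from the HKMA circular.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Transparency: customers are told they are interacting with GenAI.
DISCLOSURE = "You are interacting with an AI assistant."


@dataclass
class GenAIGuardrail:
    """Hypothetical guardrail wrapper; names are illustrative, not from the circular."""

    confidence_threshold: float = 0.8                   # below this, escalate to a human
    review_queue: list = field(default_factory=list)    # human-oversight escalation point
    incident_log: list = field(default_factory=list)    # continuous-monitoring trail

    def handle(self, answer: str, confidence: float, grounded: bool) -> str:
        """Validate a model answer before it reaches the customer."""
        # Monitoring: every interaction is logged for later review.
        self.incident_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "confidence": confidence,
            "grounded": grounded,
        })
        # Accuracy/reliability control: block ungrounded or low-confidence output
        # and route the case to human support instead.
        if not grounded or confidence < self.confidence_threshold:
            self.review_queue.append(answer)
            return f"{DISCLOSURE} Your query has been routed to a member of staff."
        return f"{DISCLOSURE} {answer}"
```

In this sketch, a grounded, high-confidence answer is returned with the AI disclosure prepended, while anything below the threshold is held back for human review; real deployments would of course attach richer metadata and alerting to the incident log.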
Why are the GenAI Consumer Protection Principles Important?
Consumer Trust in AI Interactions
By ensuring transparency and fairness in GenAI applications, institutions can foster stronger relationships with their customers and protect against reputational risk.
Addressing Emerging Risks
GenAI introduces new challenges such as hallucinated outputs and deepfakes. This policy provides early safeguards against these risks in regulated environments.
Regulatory Compliance and Alignment
Aligning with HKMA’s guidance supports responsible innovation while ensuring consistency with existing privacy and consumer protection laws.
Industry Leadership in Ethical AI
Early compliance with GenAI-specific policies demonstrates leadership and commitment to responsible AI use in banking.
Evolving from BDAI Principles
These updated principles build on the foundation of the 2019 BDAI guidance, refining the approach for more advanced and higher-impact technologies.
By complying with the HKMA GenAI Consumer Protection Principles (2024), financial institutions strengthen trust in their AI systems, align with legal and ethical standards, and demonstrate a commitment to responsible and transparent AI governance.