Aletheia Framework 2.0
The Aletheia Framework 2.0 by Rolls-Royce is a forward-thinking approach to the responsible development and deployment of Artificial Intelligence (AI) systems in manufacturing and robotics. This framework is designed to ensure that AI technologies are developed with a focus on their broader social impact, accuracy/trust, and governance. It provides a structured methodology to guide organizations in navigating the complex ethical, social, and technical challenges associated with AI.
What is the Aletheia Framework 2.0?
The Aletheia Framework 2.0 is an advanced governance model that builds on Rolls-Royce’s commitment to ethical AI. It centers around three core principles: Social Impact, Accuracy/Trust, and Governance. These principles are designed to help organizations create AI systems that are not only effective but also socially responsible, transparent, and well-governed.
Core Principles of the Aletheia Framework 2.0
- Social Impact: This principle emphasizes the importance of considering the potential impact of AI on all stakeholders, both within and outside the organization. It requires that the benefits of the AI project be clearly identifiable and that they contribute to broader social and sustainability objectives. AI systems should be designed with the well-being of society in mind, ensuring that they support positive social outcomes.
- Accuracy/Trust: The AI system must be true, fair, and trustworthy. This means that the system should be designed to be safe, free from bias, and capable of making accurate decisions. Sufficient checks and balances must be built into the AI processes to ensure that the system remains uncorrupted and maintains its integrity over time.
- Governance: Effective governance is crucial for the responsible deployment of AI. This principle focuses on the architecture and handling of data within the AI system, ensuring that both are adequately governed through planned protocols and checks. It also includes considerations for the overall security and accountability of the AI system, ensuring that it operates within a formalized and transparent governance structure.
Key Components of the Aletheia Framework 2.0
1. Social Impact
Context and Ethical Realisation Principles
Benefits: AI and robotics should deliver good, aligning with the EU guidelines for ethical AI. This includes commercial prosperity, improved safety, better working conditions, and greater job satisfaction. AI deployments must improve employee well-being or public welfare, or be backed by a business case demonstrating competitive advantages.
Human Impact: AI systems should enhance positive social change and sustainability. This includes:
- Clearly defining human interaction with AI systems and understanding their impact on human behavior.
- Collaborating with HR and employees to assess potential job role changes and explore retraining or redeployment opportunities.
- Evaluating supply chain impacts, ensuring sustainability, and communicating with partners to mitigate any negative effects.
Communication: Maintain open dialogue with all key stakeholders, especially employees, to ensure understanding and involvement in AI deployment.
Loss of Skills: Assess the potential loss or reduction of skills due to AI, and determine how to sustain necessary skills for the benefit of the business.
2. Accuracy / Trust
Context and Ethical Realisation Principles
Safety/Zero Harm: AI systems must be safe and secure throughout their operational lifetime. This includes:
- Conducting formal risk analyses focused on identifying and mitigating hazards to human safety.
Transparency and Traceability: AI systems must ensure transparency and traceability in their design, inputs, and outputs. Key elements include:
- Assessing algorithms for bias or discrimination, with a clear statement of their provenance for future troubleshooting.
- Ensuring training data is of high quality, representative, and its origin is clearly stated.
- Clearly defining the decision-making hierarchy between human and AI, and demonstrating improvements over human forecasting.
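One practical way to meet these traceability requirements is to record, for every deployed model, where its training data and algorithm came from. The sketch below is illustrative only; the framework does not prescribe a record format, and the field names and example values here are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """One traceability entry per model version: what was deployed,
    on which data snapshot, and where the algorithm originated."""
    model_name: str
    model_version: str
    training_data_source: str   # where the training data originated
    data_snapshot_date: str     # which snapshot of the data was used
    algorithm_origin: str       # e.g. library + version, or in-house repo
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Hypothetical example: log provenance alongside each deployment.
record = ProvenanceRecord(
    model_name="blade-wear-predictor",
    model_version="1.4.2",
    training_data_source="sensor-archive/2023-Q4",
    data_snapshot_date="2023-12-31",
    algorithm_origin="scikit-learn 1.4 GradientBoostingRegressor",
)
print(record.model_name, record.training_data_source)
```

Keeping such records immutable and versioned alongside the model gives future troubleshooters the "clear statement of provenance" the framework asks for.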
Bias: AI systems must be free from unintentional or unethical biases. This requires:
- Assurance that training datasets are free from bias, with considerations for deliberate bias in specific contexts (e.g., anomaly detection).
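A first-pass check on a training dataset is to measure how far each class's share deviates from a uniform split; a large deviation is not necessarily a problem (anomaly-detection data is deliberately skewed, as noted above), but it should be surfaced and justified. This is a minimal sketch, not a complete fairness audit, and the function name and tolerance are assumptions.

```python
from collections import Counter

def class_balance(labels, tolerance=0.1):
    """Flag classes whose share deviates from a uniform split by more
    than `tolerance`. A first-pass screen, not proof of fairness."""
    counts = Counter(labels)
    expected = 1.0 / len(counts)
    flagged = {}
    for cls, n in counts.items():
        share = n / len(labels)
        if abs(share - expected) > tolerance:
            flagged[cls] = round(share, 3)
    return flagged

# A deliberately skewed anomaly-detection set: "normal" dominates.
labels = ["normal"] * 90 + ["fault"] * 10
print(class_balance(labels))  # → {'normal': 0.9, 'fault': 0.1}
```

Whether a flagged imbalance is acceptable is a judgment call for the project team; the point is that the deviation is stated explicitly rather than discovered later.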
Validity and Reliability: Trust in AI is built on validity and reliability. This involves:
- Deploying monitors to compare actual outputs with expected ranges.
- Implementing continuous automated testing with known data outputs.
- Conducting independent checks using separate assessment mechanisms, which may involve human validation.
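The first two bullets above can be sketched together: a range monitor that flags outputs outside an expected envelope, run continuously against inputs with known ("golden") outputs. This is a simplified illustration under assumed names; the model stand-in and tolerance are hypothetical.

```python
def range_monitor(expected_low, expected_high):
    """Return a check that flags outputs outside the expected range."""
    def check(output):
        return expected_low <= output <= expected_high
    return check

def run_golden_tests(model, cases, monitor):
    """`cases` is a list of (input, known_output) pairs. Each prediction
    must match the known output and pass the range monitor."""
    failures = []
    for x, known in cases:
        y = model(x)
        if not monitor(y) or abs(y - known) > 1e-6:
            failures.append((x, y, known))
    return failures

# Hypothetical stand-in for a deployed model.
model = lambda x: 2 * x
monitor = range_monitor(0, 100)
print(run_golden_tests(model, [(1, 2), (10, 20), (60, 120)], monitor))
# → [(60, 120, 120)]: correct value, but outside the expected range
```

In practice the failure list would feed an alerting pipeline, and the independent check in the third bullet would re-run the same cases through a separate assessment mechanism.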
Process Comprehensiveness and Data Integrity:
- Ensuring the thoroughness of assessments through process checks.
- Verifying the integrity of data transmission, using techniques such as Cyclic Redundancy Checks (CRC) where appropriate.
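The CRC technique mentioned above can be shown with Python's standard-library `zlib.crc32`: the sender appends a checksum to the payload, and the receiver recomputes it to detect corruption. The framing (4-byte big-endian trailer) is an assumption for illustration, not a prescribed protocol.

```python
import zlib

def send_with_crc(payload: bytes) -> bytes:
    """Append a CRC-32 checksum (4 bytes, big-endian) to the payload."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def receive_with_crc(frame: bytes) -> bytes:
    """Recompute the trailing checksum; raise if the data was corrupted."""
    payload, crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    if zlib.crc32(payload) != crc:
        raise ValueError("CRC mismatch: data corrupted in transit")
    return payload

frame = send_with_crc(b"sensor-reading:42")
assert receive_with_crc(frame) == b"sensor-reading:42"

# Flip bits in the first byte to simulate transmission corruption.
corrupted = bytes([frame[0] ^ 0xFF]) + frame[1:]
try:
    receive_with_crc(corrupted)
except ValueError as e:
    print(e)  # → CRC mismatch: data corrupted in transit
```

CRC detects accidental corruption; it is not a cryptographic guarantee, so tamper resistance would need a keyed MAC on top.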
Sparse Data Interpolation: The impact of sparse training data on output validity must be clearly stated and justified.
3. Governance
Context and Ethical Realisation Principles
Data Protection: Trust in AI requires strong data protection measures. Key actions include:
- Clearly stating the presence and use of any personal data.
- Declaring the legitimate purpose for using personal data and ensuring consent is obtained.
- Protecting data from unauthorized access through ‘privacy by design and by default.’
- Ensuring the system’s architecture can identify, update, amend, or remove personal data on demand, complying with privacy laws and individuals’ rights.
- Ensuring no personal data is transferred outside of the relevant legal jurisdiction.
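The identify/amend/remove requirement implies an architecture where personal data is keyed to the data subject, so that a single request can locate, rectify, or erase every record. The sketch below is a minimal in-memory illustration of that interface; the class, method names, and example records are hypothetical, and a real system would also have to purge backups and derived datasets.

```python
class PersonalDataStore:
    """Minimal sketch: identify, amend, and erase a data subject's
    records on demand (illustrative interface, not a product design)."""
    def __init__(self):
        self._records = {}  # subject_id -> dict of personal data fields

    def identify(self, subject_id):
        """Subject access request: what do we hold on this person?"""
        return self._records.get(subject_id)

    def amend(self, subject_id, **fields):
        """Rectification: add or correct fields for a subject."""
        self._records.setdefault(subject_id, {}).update(fields)

    def erase(self, subject_id):
        """Erasure request: remove all records; True if anything existed."""
        return self._records.pop(subject_id, None) is not None

store = PersonalDataStore()
store.amend("emp-001", name="A. Smith", role="inspector")
store.amend("emp-001", role="senior inspector")   # rectification
print(store.identify("emp-001"))
print(store.erase("emp-001"))     # → True
print(store.identify("emp-001"))  # → None: no trace remains
```

Keying everything to the subject identifier is the design choice that makes on-demand erasure tractable; personal data scattered across unindexed logs cannot satisfy this requirement.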
Export Control: Data flows must be compliant with Export Control regulations:
- Describing and getting approval for data flows from an Export Control manager.
Confidential Information and Cybersecurity: To protect AI systems:
- All confidential information must be reviewed and approved by an IT security expert.
- Ensuring cybersecurity measures are in place to safeguard confidential information.
Accountability: Establishing clear responsibility and accountability for AI systems:
- Clearly identifying a business owner accountable for AI system outcomes.
- Joint algorithmic accountability should be established among developers, testers, and the DevOps team, with confidence in their respective contributions clearly stated.
Risks from Re-use/Transfer Across Processes:
- Knowledge transfer between AI systems should undergo formal risk assessment to identify and mitigate potential failures, with serious events reviewed before proceeding.
Why is the Aletheia Framework 2.0 Important?
As AI technologies become increasingly integrated into various industries, the need for frameworks like Aletheia 2.0 becomes critical. The framework helps organizations navigate the ethical, legal, and social complexities of AI, ensuring that their AI systems are not only innovative but also responsible and aligned with societal values.
By adopting the Aletheia Framework 2.0, organizations can enhance their reputation, build trust with stakeholders, and contribute to the broader goal of ensuring that AI serves the public good.
Overall, the Aletheia Framework 2.0 represents Rolls-Royce’s leadership in the responsible AI space, offering a practical and principled approach to developing AI systems that are socially impactful, accurate, and well-governed.