EY and ACCA Release Framework to Boost Trust in AI Through Rigorous Assessments

As artificial intelligence continues to transform industries, a new policy paper by Ernst & Young (EY) and the Association of Chartered Certified Accountants (ACCA) outlines critical guidance for business leaders and policymakers to ensure AI systems are safe, reliable, and trusted. Titled “AI Assessments: Enhancing Confidence in AI”, the report emphasizes the growing importance of structured AI evaluations as adoption accelerates worldwide. 

The paper categorizes AI assessments into three core types: 

  • Governance assessments – evaluating internal structures and oversight mechanisms.
  • Conformity assessments – verifying compliance with legal and regulatory standards.
  • Performance assessments – measuring the system’s effectiveness against predefined metrics.

EY and ACCA argue that robust assessments enhance corporate governance, mitigate risks, and ultimately build stakeholder confidence. With AI now influencing sectors from finance to healthcare, the paper underscores the need for clearly defined methodologies, alignment with international standards, and a skilled assessment ecosystem.

Published amid evolving AI policies—including the U.S. administration’s recent AI Action Plan—the paper urges policymakers to set clear frameworks and support scalable, globally compatible standards that are not overly burdensome to businesses. 

Marie-Laure Delarue, EY’s Global Vice-Chair for Assurance, highlighted that trust is essential for unlocking AI’s full potential. ACCA CEO Helen Brand stressed the importance of bridging skills gaps and strengthening public trust in AI technologies.