
What boards should demand from AI: assessment, audit, and assurance

2026/03/24 00:03

By Erika Fille T. Legara

IN A PREVIOUS BusinessWorld article, I argued that AI governance goes beyond overseeing a handful of technology projects and now encompasses ensuring that AI-enabled decisions across the organization remain aligned with strategy, risk appetite, and ethical standards. A natural follow-on question for boards is: beyond setting expectations, how does an organization verify that its AI systems are actually performing as intended, responsibly, and within defined boundaries?

The answer lies in three related but distinct disciplines: AI risk assessment, AI audit, and AI assurance. Boards familiar with financial oversight will find the logic intuitive. The challenge, and the opportunity, is applying that same discipline to AI.

THREE DISTINCT BUT RELATED CONCEPTS
It helps to be precise about what each term means, because they are often used interchangeably when they should not be.

AI risk assessment is the internal process by which an organization identifies, evaluates, and prioritizes the risks associated with its AI systems. It asks what could go wrong, how likely it is, and what the impact would be. This is the foundation on which everything else rests. Without a credible risk assessment, neither audit nor assurance has a meaningful baseline to work from. Material AI systems exist across every sector: a credit scoring model in a bank, a patient triage tool in a hospital, a student performance predictor in a university, a case prioritization system in a government agency. What they share is consequence: their outputs affect real people in meaningful ways.

For any such system, risk assessment should be systematic, documented, and revisited regularly as the model evolves and as the operating environment changes.
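To make "systematic and documented" concrete, a risk register can be as simple as a structured record per risk, scored and revisited on a schedule. The sketch below is purely illustrative; the field names and the 1-to-5 likelihood/impact scale are assumptions, not a prescribed standard.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch of one AI risk register entry. The fields and the
# 1-5 scoring scale are illustrative assumptions, not a formal framework.
@dataclass
class AIRiskEntry:
    system: str            # which AI system, e.g. "credit scoring model"
    risk: str              # what could go wrong
    likelihood: int        # 1 (rare) to 5 (almost certain)
    impact: int            # 1 (negligible) to 5 (severe)
    last_reviewed: date    # supports the "revisited regularly" discipline
    mitigations: list = field(default_factory=list)

    @property
    def priority(self) -> int:
        # Simple likelihood x impact product for ranking risks
        return self.likelihood * self.impact

entry = AIRiskEntry(
    system="credit scoring model",
    risk="disparate approval rates across demographic groups",
    likelihood=3,
    impact=5,
    last_reviewed=date(2026, 3, 1),
    mitigations=["quarterly bias testing", "human review of borderline cases"],
)
print(entry.priority)  # 15
```

Even this minimal structure gives audit and assurance work a documented baseline to test against.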

AI audit is the independent examination of whether an AI system, or the governance framework surrounding it, conforms to defined standards, policies, or requirements. It is an evidence-based process conducted by a party sufficiently independent of those responsible for the system under review. An AI audit might assess whether an organization’s AI management practices conform to an internationally recognized standard, such as ISO/IEC 42001, the world’s first AI management system standard published in 2023, or whether a specific model is performing within approved parameters and without unintended bias. Importantly, the standard governing auditors themselves, ISO/IEC 42006, published in July 2025, now sets out the competence and rigor required of bodies that audit and certify AI management systems. The auditing profession, in other words, is beginning to formalize its own accountability for AI engagements.
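As one concrete example of what "performing without unintended bias" can mean in practice, auditors often compute fairness metrics such as the demographic parity gap, the difference in favorable-outcome rates between groups. The sketch below is a simplified illustration; the group labels, toy decisions, and any tolerance threshold an auditor would apply are assumptions.

```python
# Illustrative audit check: demographic parity gap on approval decisions.
# Data and group labels are toy values for the sketch.
def demographic_parity_gap(decisions, groups):
    """Largest difference in approval rate between any two groups."""
    counts = {}  # group -> (total, approved)
    for d, g in zip(decisions, groups):
        n, k = counts.get(g, (0, 0))
        counts[g] = (n + 1, k + d)
    rates = {g: k / n for g, (n, k) in counts.items()}
    return max(rates.values()) - min(rates.values())

decisions = [1, 1, 0, 1, 0, 0, 1, 0]   # 1 = approved, 0 = declined
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(decisions, groups)
print(gap)  # group A approves 3/4, group B approves 1/4, so the gap is 0.5
```

A real audit would go well beyond a single metric, but checks like this are the kind of evidence an independent reviewer compiles when testing a model against defined criteria.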

AI assurance is the formal, stakeholder-facing conclusion that emerges from that audit process. It is the professional opinion, issued by a qualified and independent party, that gives boards, regulators, investors, and the public confidence that an AI system or AI management framework meets a defined standard. Assurance is what transforms an internal review into a credible external signal.

GROUNDING AI ASSURANCE
The concept of independent assurance is not new to boards. Every year, external auditors examine an organization’s financial statements and issue an opinion: a conclusion grounded in evidence, conducted under internationally recognized standards, and underpinned by the auditor’s professional independence. That opinion carries weight precisely because the framework governing it is rigorous and well-established. This logic applies regardless of industry; whether the organization is a bank, a hospital, a conglomerate, or a public institution, the financial audit is a familiar and trusted mechanism.

The same logic now applies to AI. When an organization makes a public or regulatory claim about its AI systems, that they are fair, transparent, compliant with a defined standard, or free from material bias, the question is: who independently validates that claim, and under what professional framework?

The answer, for the accounting and audit profession, is ISAE 3000, the International Standard on Assurance Engagements issued by the International Auditing and Assurance Standards Board (IAASB). ISAE 3000 governs assurance engagements on matters other than historical financial information, making it the natural home for AI assurance. Under this standard, a professional can conduct either a reasonable assurance engagement, the higher standard analogous to a financial audit, or a limited assurance engagement, which is closer in depth to a review. The choice of level matters and should be deliberate, calibrated to the materiality and risk of the AI system in question.

A close contemporary parallel is sustainability or ESG assurance. Many Philippine-listed companies are already commissioning independent assurance on their sustainability disclosures, often under ISAE 3000. The mechanics are exactly the same: an independent practitioner examines a set of claims against defined criteria and issues a formal conclusion. The subject matter differs; the professional discipline does not.

WHAT THIS MEANS FOR BOARDS
Three practical implications follow from this framework.

First, boards should ask whether their organizations have conducted rigorous AI risk assessments on material systems. Not a one-time exercise, but a living process that is updated as models are retrained, use cases expand, and the regulatory environment evolves. The quality of downstream audit and assurance work is only as good as the risk assessment that precedes it.

Second, boards should distinguish between internal and external AI audit. Internal audit functions play a critical role in providing assurance that AI controls operate as designed. However, boards should also consider whether an independent, third-party audit of material AI systems is warranted, particularly for systems that affect customers, employees, or the public in consequential ways. As with financial auditing, independence strengthens credibility.

Third, as organizations increasingly make public commitments about their AI practices to regulators, investors, and the communities they serve, boards should ask whether those commitments are backed by credible assurance. Assertions without independent validation are, at best, a reputational risk waiting to materialize.

A PROFESSION STILL BUILDING ITS CAPABILITIES
It would be incomplete to present this landscape without acknowledging its current limitations. The infrastructure for AI assurance is still being built. Professional standards are emerging. Auditor competencies in AI, spanning machine learning, algorithmic bias, data governance, and model transparency, are not yet uniformly developed across the profession. ISAE 3000 provides the assurance framework, but the AI-specific methodologies that sit within it are still maturing.

For organizations not yet ready to pursue formal assurance, this is not a reason to stand still. A structured, regular assessment of material AI systems is a meaningful and practical first step. It builds the internal discipline, documentation, and governance habits that assurance-readiness eventually requires. Boards that commission such assessments today, even informally, are developing institutional muscle that will matter when regulatory expectations harden and stakeholder scrutiny intensifies.

This view is one I have explored more deeply in research I have been developing with colleagues examining generative AI governance in economies where regulation has yet to catch up with technology. The central argument is that firms are already moral agents with existing ethical obligations to their stakeholders; waiting for bespoke AI legislation is neither necessary nor sufficient for responsible governance. The obligation to act is already there. What is needed is the organizational will to operationalize it.

This is not a reason for boards to wait on the broader agenda. It is a reason to ask informed questions now, of their external auditors, their internal audit functions, and their management teams, so that when the profession’s capabilities catch up with the demand, their organizations are ready to engage meaningfully.

The financial audit did not emerge fully formed. It took decades of standard-setting, professional development, and hard lessons from corporate failures for the independent audit to become the credible institution it is today. AI assurance is at a comparable early inflection point. Boards that engage with it now, asking sharper questions of their auditors, demanding more than management assertions, and building internal capabilities before regulators require them to do so, will not only reduce their own exposure. They will help shape what responsible AI accountability looks like for Philippine organizations and the broader region.

Erika Fille T. Legara is a physicist, educator, and data science and AI practitioner working across government, academia, and industry. She is the inaugural managing director and chief AI and data officer of the Education Center for AI Research, and an associate professor and Aboitiz chair in Data Science at the Asian Institute of Management, where she founded and led the country’s first MSc in Data Science program from 2017 to 2024. She serves on corporate boards, is a fellow of the Institute of Corporate Directors, an IAPP Certified AI Governance Professional, and a co-founder of CorteX Innovations.
