The Dual Mandate – A strategic framework for GCC organizations navigating algorithmic accountability
“When did your internal audit function last issue a finding about an AI system? Not cybersecurity in general – but specifically about an algorithm your organization uses to make consequential decisions?”
If the answer is never, you are not alone. But the clock is running. According to a 2025 AuditBoard survey, 61% of internal audit leaders admit they lack AI expertise, yet they simultaneously rank AI risks as their lowest concern among 14 key risk areas they monitor. Former IIA CEO Richard Chambers calls this a “significant credibility gap.” I call it the defining professional blind spot of our generation.
In the UAE and GCC, this gap is more than a professional issue; it is a strategic liability. With AI forecast to contribute up to 20% of the UAE’s non-oil GDP by 2031, and the national AI market projected to reach AED 170 billion by 2030, algorithms are no longer emerging risks. They are operating the infrastructure. And someone needs to be providing independent assurance on them.
That someone is internal audit, if it is ready.
| 62% | 61% | 2–4% | AED 170B |
| GCC companies using AI in core functions | IA leaders lacking AI expertise (AuditBoard, 2025) | IA departments with substantial AI implementation | UAE AI market size projected by 2030 |
1 – The Dual Mandate: Two Obligations, One Profession
AI in internal audit carries a dual mandate, and most organizations are acting on only the first. Here is what both mandates demand.
MANDATE 1 · Use AI to Audit Better
- What it is: Leverage ML, GenAI, NLP, and process mining to deliver assurance that is faster, wider, deeper, and continuous.
- Key tools: Continuous Controls Monitoring, Predictive Risk Analytics, NLP Document Review, GenAI-Assisted Reporting
- IIA Standard: Standard 13.1 – Technology and Data Analytics: use technology where it enhances audit effectiveness
MANDATE 2 · Audit AI to Govern It
- What it is: Develop competency to evaluate AI systems the organization deploys for governance, accuracy, bias risks, and regulatory alignment.
- Key areas: Model Bias Audits, Algorithmic Drift Testing, Shadow AI Reviews, Deepfake Fraud Control Assessments
- IIA Standard: Standard 9.2 – Risk Assessment must include AI-driven decision systems in the risk universe
These two mandates are not parallel tracks – they are a virtuous cycle. The auditor who uses AI will better understand its risks. The auditor who understands AI risks will deploy tools more responsibly.
2 – The UAE Context: Why This Is Not Optional
In October 2017, the UAE made history by appointing the world’s first Minister of State for Artificial Intelligence. The UAE National AI Strategy 2031 followed, setting one of the most ambitious AI integration agendas on the planet.
| AED 335B | 44% | $15.2B | 58% |
| Additional growth targeted via AI by 2031 | CAGR of UAE AI market through 2030 | Microsoft’s committed UAE AI investment through 2029 | Surge in UAE ransomware activity (Cybersecurity Council, 2025) |
When AI is a national strategy, not just a vendor product, the governance and assurance expectations around it move from ‘nice to have’ to regulatory and reputational imperative. That is the operating environment for every auditor in the region.
CASE STUDY: The US$25M Deepfake CFO – A Warning for GCC Finance Teams
In early 2024, global engineering firm Arup lost US$25 million in a single attack.
A finance employee received an urgent payment request from the UK-based CFO. Suspicious, they requested a video conference to verify. The ‘CFO’ and every ‘senior leader’ on that call were entirely AI-generated deepfakes — convincing enough that the employee completed 15 transactions to five fraudulent accounts.
Key statistics: The FBI’s 2024 Internet Crime Report recorded US$2.6 billion in global losses from Business Email Compromise and vishing. Deepfake attacks have been doubling annually since 2022. A 2024 Deloitte survey found 25.9% of executives had already experienced at least one deepfake incident targeting financial data.
For UAE organizations: The UAE Cybersecurity Council 2025 report identifies financial services and banking as among the highest-exposure sectors. Does your internal audit programme cover the controls preventing exactly this?
3 – Mandate 1: The 5 Engines of AI-Powered Auditing
MetricStream’s 2025 GRC Practitioner Survey found that 48.24% of risk teams now have active AI pilots in risk monitoring – the highest adoption category in the survey. The technology is proven. The barrier is deployment.
01- Continuous Controls Monitoring (CCM)
Traditional audit tests a 5% sample once a year. AI-powered CCM tests 100% of transactions continuously — flagging anomalies the moment they occur rather than 11 months later. For UAE financial institutions, this means detecting split invoices, vendor master changes, and Hawala typologies in real time.
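As a concrete illustration of what a CCM rule looks like in practice, here is a minimal sketch of a split-invoice check: flag vendors whose same-day invoices individually sit below an approval threshold but together exceed it. The vendor IDs, the AED 50,000 threshold, and the sample data are all illustrative assumptions, not a real rule set.

```python
# Minimal sketch of a continuous-controls split-invoice check.
# Threshold, vendor IDs, and data are illustrative assumptions.
from collections import defaultdict

APPROVAL_THRESHOLD = 50_000  # assumed single-approval limit in AED

invoices = [
    {"vendor": "V-001", "date": "2025-03-02", "amount": 30_000},
    {"vendor": "V-001", "date": "2025-03-02", "amount": 28_000},
    {"vendor": "V-002", "date": "2025-03-02", "amount": 12_000},
]

def flag_split_invoices(rows, threshold=APPROVAL_THRESHOLD):
    """Group invoices by (vendor, date) and flag groups where every
    invoice is under the threshold but the group total exceeds it."""
    groups = defaultdict(list)
    for row in rows:
        groups[(row["vendor"], row["date"])].append(row["amount"])
    return [
        key for key, amounts in groups.items()
        if all(a < threshold for a in amounts) and sum(amounts) > threshold
    ]

print(flag_split_invoices(invoices))  # prints [('V-001', '2025-03-02')]
```

In a live CCM deployment the same rule would run against the full transaction feed continuously rather than against a static list, which is precisely the shift from 5% sampling to 100% coverage described above.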
02- NLP for Regulatory Intelligence
UAE organizations navigate CBUAE regulations, SCA requirements, DIFC rulebooks, and DFSA frameworks simultaneously. NLP tools track regulatory changes, map obligations to control frameworks automatically, and flag gaps before the regulator does.
03- Process Mining & Digital Footprint Analysis
Every ERP and procurement system generates a complete digital record. Process mining reconstructs actual processes versus designed processes – revealing deviations and control failures that are invisible to traditional interview-based auditing.
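A simplified conformance check conveys the idea: compare each case's actual event sequence from an ERP log against the designed procure-to-pay sequence. The event names, the log, and the missing-step logic are illustrative assumptions; real process mining also checks ordering, timing, and actor segregation.

```python
# Minimal sketch of process-mining conformance checking: report
# designed steps missing from each case's actual event trail.
# Event names and the sample log are illustrative assumptions.

DESIGNED = ["requisition", "approval", "purchase_order",
            "goods_receipt", "invoice", "payment"]

event_log = {
    "case-001": ["requisition", "approval", "purchase_order",
                 "goods_receipt", "invoice", "payment"],
    "case-002": ["requisition", "purchase_order", "invoice",
                 "payment"],  # approval and goods receipt skipped
}

def deviations(log, designed):
    """Return, per case, the designed steps absent from the log."""
    issues = {}
    for case, events in log.items():
        missing = [step for step in designed if step not in events]
        if missing:
            issues[case] = missing
    return issues

print(deviations(event_log, DESIGNED))
```

Here case-002 surfaces with a skipped approval and goods receipt, the kind of control bypass that interview-based walkthroughs routinely miss.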
04- Generative AI for Planning & Reporting
Risk assessments, audit programmes, and draft reports that took days now take hours. Auditors refocus their time on judgment, interpretation, and stakeholder engagement — the work that actually requires human intelligence.
05- Predictive Risk Analytics
In a region as geopolitically dynamic as the GCC, AI models that update the organization’s risk universe daily are not a luxury. They are a competitive necessity for audit functions that want to remain relevant to their boards.
4 – Mandate 2: Auditing AI – The Risks Nobody’s Catching
AI systems are already making consequential decisions inside UAE and GCC organizations — credit approvals, fraud flags, supplier risk scores, employee performance ratings, and government service eligibility. Each carries risk. None appear on the traditional internal audit radar.
| AI RISK CATEGORY | WHAT THIS MEANS FOR UAE ORGANIZATIONS |
| Model Bias & Discrimination | AI trained on historical data may perpetuate bias in lending, HR, or customer decisions — intersecting directly with Emiratisation commitments and UAE diversity obligations. |
| Algorithmic Drift | A model accurate at deployment degrades as data patterns change. Most organizations have zero audit programme for this. A ‘working’ model silently becomes a liability. |
| AI Hallucination | GenAI tools produce confident but factually wrong outputs. When they are used in audit work or financial analysis without human review, hallucinated outputs become organizational risk. |
| Shadow AI | Employees using unauthorized AI tools outside IT governance — uploading confidential data to consumer AI platforms. ISACA reports this is widespread and poorly controlled across GCC organizations. |
| Explainability Failures | DIFC and ADGM frameworks are moving toward mandatory explainability for AI decisions. A ‘black box’ that approves or denies is no longer acceptable in regulated sectors. |
| Deepfake & Synthetic Fraud | AI-generated audio, video, and documents bypassing authentication controls. US$25M Arup loss. US$2.6B in global BEC losses. A live threat in GCC financial environments. |
| Third-Party AI Risk | Organizations using vendor AI systems inherit those systems’ risks — including data privacy, model quality, and compliance risks they have not assessed. |
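The algorithmic drift row in the table above is auditable quantitatively. One common screen is the population stability index (PSI), which compares a model's score distribution at deployment against today's. The bucket edges, sample scores, and the 0.2 alert threshold below are illustrative assumptions (practitioners commonly cite 0.1–0.25 as investigation triggers).

```python
# Minimal sketch of a model-drift test using the population stability
# index (PSI). Buckets, data, and the 0.2 threshold are assumptions.
import math

def psi(expected, actual, edges):
    """PSI = sum over buckets of (a% - e%) * ln(a% / e%)."""
    def shares(values):
        counts = [0] * (len(edges) - 1)
        for v in values:
            for i in range(len(edges) - 1):
                if edges[i] <= v < edges[i + 1]:
                    counts[i] += 1
                    break
        # floor at a tiny share to avoid log(0) on empty buckets
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = shares(expected), shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

edges = [0.0, 0.25, 0.5, 0.75, 1.01]                   # score buckets
baseline = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]    # at deployment
current = [0.6, 0.7, 0.7, 0.8, 0.8, 0.9, 0.9, 0.95]    # today

score = psi(baseline, current, edges)
print("investigate drift" if score > 0.2 else "stable")
```

A periodic job computing this for each production model, with thresholds agreed with the model owners, is the skeleton of the "zero audit programme" gap the table describes.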
4.1 The Shadow AI Problem: The Risk Nobody Is Auditing
Shadow AI deserves special attention because it is simultaneously widespread, underestimated, and entirely within the traditional scope of internal audit — yet almost no organization has a formal audit programme for it.
Think: the finance analyst uploading confidential client data to ChatGPT to build a financial model. The HR manager using an AI hiring tool that has never been reviewed for bias. The legal team drafting contracts with a GenAI tool no one in compliance has approved. These are not hypothetical scenarios; they are happening in UAE organizations today.
The UAE Federal Data Protection Law (No. 45 of 2021) and DIFC Data Protection Law both create explicit obligations around automated processing of personal data. Shadow AI is a live, auditable, regulatory risk — if someone decides to scope it.
5 – The New IIA Standards 2025: Technology Is Now an Obligation
The 2024 IIA Global Internal Audit Standards, mandatory since January 9, 2025, represent the most significant update to the profession’s mandatory guidance since 2017. For the first time, technology is not a consideration — it is an obligation.
What the New IIA Global Standards Require
Standard 13.1 — Technology and Data Analytics: Internal audit must use technology where it enhances effectiveness and quality of audit work. This is no longer ‘consider’ — it is a requirement.
Standard 9.2 — Risk Assessment: Must include technology risks in the risk universe — explicitly encompassing AI-driven decision systems, algorithmic models, and emerging digital risks.
Standard 6.2 — Individual Competency: Auditors must maintain competencies commensurate with the risk environment, which in 2026 means AI literacy is a professional requirement, not an optional extra.
Bottom line: An internal audit function with no AI audit programme and no data analytics tools is no longer conformant with global best practice — and will face quality assessment findings to that effect.
6 – The AI Audit Maturity Model: Where Are You Today?
Wherever your function stands today, there is a clear next step. Here is a five-level roadmap for UAE audit functions building their AI capability:
L1 — AWARE: AI is on your radar as a risk and an opportunity. No tools deployed yet.
► First action: Conduct an AI inventory — map every system making or influencing decisions in your organization, including ones IT does not know about.
L2 — EXPLORING: Pilot analytics tools. AI added to risk universe. First AI-related observations issued.
► Key move: Add Shadow AI review to your IT general controls audit programme.
L3 — DEPLOYING: Continuous controls monitoring live. First formal AI model audit conducted.
► Key move: Run a bias assessment and model drift test on your highest-impact AI system.
L4 — INTEGRATED: AI fully integrated in planning, execution, and reporting. AI governance is a standing audit area.
► Key move: Deliver an AI Assurance Scorecard to the Audit Committee quarterly.
L5 — STRATEGIC: Internal audit shapes AI governance architecture organization-wide. CAE is a standing member of the AI governance committee.
► Outcome: The function defines the standard for AI governance — it does not just evaluate it after the fact.
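The Level 3 bias assessment can start with something as simple as the adverse impact ratio: each group's selection rate divided by the most favoured group's rate. The 0.8 "four-fifths" benchmark below comes from US EEOC guidance and is used here purely as an illustrative screening threshold; the groups and outcome counts are fabricated sample data, and local regulatory criteria would govern a real engagement.

```python
# Minimal sketch of a bias screen using the adverse impact ratio.
# Groups, counts, and the 0.8 benchmark are illustrative assumptions.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: rate}"""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def adverse_impact(outcomes, benchmark=0.8):
    """Return {group: (ratio vs best group, flagged below benchmark)}."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (r / best, r / best < benchmark) for g, r in rates.items()}

sample = {"group_a": (45, 100), "group_b": (28, 100)}
for group, (ratio, flagged) in adverse_impact(sample).items():
    print(group, round(ratio, 2), "FLAG" if flagged else "ok")
```

A flagged ratio is not proof of discrimination; it is the trigger for the deeper model review that the maturity model's Level 3 key move calls for.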
6.1 Your 30-Day Quick-Start Plan
You do not need a transformation programme to begin the Dual Mandate. Four focused actions over 30 days:
WEEK 1: Map every AI/ML system in use across your organization. Include vendor-provided AI AND employee-used tools. Do not assume IT has a complete list. Shadow AI is, by definition, unregistered.
WEEK 2: Risk-rate each system: Impact (what decisions does it influence?) × Governance (how well is it controlled?). Prioritize high-impact, low-governance items for your next audit cycle.
WEEK 3: Conduct an honest AI competency assessment of your team. Identify the gap. The IIA, ISACA, and Coursera all offer AI audit training today. Start this week, not next quarter.
WEEK 4: Present your inventory and gap findings to the Audit Committee. Frame it as opportunity: the audit function that leads AI governance will be the most valued assurance partner in the organization.
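The Week 2 risk-rating step can be sketched as a simple scoring exercise. The 1–5 scales, the inverted-governance weighting, and the system names below are all illustrative assumptions; the point is only that high-impact, low-governance systems should sort to the top of the next audit cycle.

```python
# Minimal sketch of the Week 2 exercise: rank AI systems by
# Impact x (6 - Governance) so weakly governed, high-impact systems
# surface first. Scales and names are illustrative assumptions.

systems = [
    {"name": "credit-scoring model", "impact": 5, "governance": 3},
    {"name": "chatbot FAQ tool",     "impact": 2, "governance": 4},
    {"name": "HR screening AI",      "impact": 4, "governance": 1},
]

def priority(system):
    # invert governance (1-5) so weak governance raises the score
    return system["impact"] * (6 - system["governance"])

ranked = sorted(systems, key=priority, reverse=True)
for s in ranked:
    print(priority(s), s["name"])
```

Even a spreadsheet version of this ranking gives the Audit Committee presentation in Week 4 a defensible, repeatable basis for the proposed audit plan.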
7 – The Trusted Navigator in an Algorithmic World
Internal audit has always been the trusted navigator aboard the organizational vessel. The charts have changed. The currents are called machine learning, generative AI, and agentic systems. The storms are named deepfake fraud, model bias, Shadow AI, and algorithmic accountability failure.
The UAE has made a national bet on artificial intelligence. Organizations that govern AI responsibly will win. And internal audit functions that develop the Dual Mandate capability will be the architects of that responsible governance.
The algorithm has entered the boardroom. It is time for the auditor to follow it in — not as a skeptic, but as a skilled, credible, and indispensable guide.
“The greatest danger in times of turbulence is not the turbulence – it is to act with yesterday’s logic.”
— Peter Drucker
About The Author
Ahsan Abdullah
CMA Ahsan Abdullah is the Managing Partner & Co-Founder of Tass & Hamjit Financial Advisory. A visionary in building technology and AI solutions within finance, investment, and compliance, Ahsan is committed to transforming SMEs into thriving, scalable businesses through smart financial and investment advisory.