AI Governance and Compliance Auditor (AIGCA)
CPGA’s new AI Governance and Compliance Auditor (AIGCA) credential builds on existing audit and governance qualifications by specializing in AI systems. This advanced certification targets experienced finance, audit, and IT professionals (CPGA, CMGA, CPA, ACCA, CISA, CISM, CIA, etc.) who must integrate AI into their risk and audit practices. As organizations embed AI into core processes, the ability to audit AI models and algorithms has become critical. AIGCA addresses this demand with a broader, more rigorous syllabus than prior programs, emphasizing practical skills, ethics, and the latest AI governance standards.
Artificial intelligence and digital transformation demand that auditors synthesize technology, data, and ethics.
A recent conceptual framework shows how “Digital Transformation” enables AI and other innovations, but Ethical Considerations (at center) must guide auditing practice. The AIGCA curriculum is designed accordingly: it teaches not only how to use AI auditing tools but also how to align AI projects with governance, compliance, and trust frameworks. By covering emerging topics like AI system impact assessments and ethics, AIGCA ensures auditors are prepared to oversee AI responsibly.
Target Audience & Entry Requirements
Qualification: Mid-career professionals with a strong audit/governance background. Examples include CPGA-certified auditors, chartered accountants (ACCA, ACA, CPA), internal audit professionals (CIA), or IT auditors (CISA/CISM) who are already practicing in risk, finance, IT or compliance roles.
Experience: Typically 3–5 years of professional experience in auditing, risk management, IT governance, or related fields. At least 1–2 years should involve projects with data analytics or automated systems.
Application: Candidates submit proof of qualification and experience. CPGA reserves the right to audit documentation.
Certification Domains & Syllabus
The AIGCA exam covers four broad domains. Each domain includes multiple topics and subtopics, ensuring a practical and in-depth understanding of AI audit and governance:
- Domain 1: AI Governance, Strategy & Risk Management (≈30–35%)
- AI Governance Models: Organizational structures for AI oversight (AI steering committees, ethics boards). Development of an AI strategy aligned with business goals.
- Standards & Frameworks: Key AI governance standards (ISO/IEC 42001, ISO/IEC 38507) and frameworks (NIST AI RMF) for managing AI risk.
- Policy, Roles & Accountability: Defining AI-related policies, responsibilities and ownership of AI risk. Compliance with data protection laws (GDPR, UK DPA) in AI contexts.
- Ethics & Trustworthy AI: Concepts of fairness, explainability, and transparency. Ethical AI principles (e.g. the OECD AI Principles, IEEE and EU ethics guidelines) and how they affect auditing. Techniques for evaluating algorithmic bias and social impact.
- Risk Identification & Assessment: Identifying AI-specific risks (bias, model drift, vendor risk). Conducting AI risk assessments and integrating them into enterprise risk management.
- Data Governance: Ensuring AI training data quality, privacy, and lineage. Controls for sensitive data, synthetic data generation, and data lifecycle in AI projects.
- Domain 2: AI Technology, Development & Operations (≈30–35%)
- AI/ML Fundamentals: Types of AI (ML algorithms, neural networks, deep learning, NLP, computer vision). Overview of development tools and platforms.
- Data Management for AI: Processes for data collection, labeling, and cleansing. Ensuring data security and confidentiality, and balancing training datasets (handling scarcity or bias).
- AI Solution Development Lifecycle (MLOps): AI project lifecycles from design through deployment: data engineering, model training/validation, version control. Privacy/Security by Design in AI development.
- Deployment & Monitoring: Methods for deploying AI systems (cloud, edge). Monitoring model performance and drift over time. Change management for AI updates and patching.
- AI Security & Resilience: AI-specific threats (adversarial attacks, model inversion, poisoning) and controls (robustness testing, anomaly detection). Integrating AI into cybersecurity programs.
- Emerging AI Technologies: Practical knowledge of generative AI (LLMs, transformers) and other new AI paradigms. Awareness of applications (e.g. AI in finance, healthcare) and their audit implications.
- Domain 3: AI Audit & Assurance Techniques (≈25–30%)
- Audit Planning & Scoping: Defining AI audit objectives. Identification of AI “assets” (models, data pipelines, algorithms) and key control points. Risk-based scoping of AI projects.
- Controls Assessment: Reviewing controls around AI systems: logical access to data/models, change management, governance over algorithms. Auditing AI system access, source code, and development practices.
- Audit Evidence & Testing: Designing tests for AI systems (e.g. test data validation, output verification). Using sample data to test model accuracy and fairness. Employing statistical and data-analytic techniques to gather evidence (e.g. anomaly detection in outputs).
- Audit Data Analytics: Leveraging analytics and AI tools in the audit (e.g. using Python/R for data analysis, applying ML to detect fraud patterns). Understanding the auditor’s use of AI (e.g. for audit sampling or document review).
- Tools for AI Auditors: Familiarity with AI audit tools and libraries (e.g. TensorFlow’s explainability APIs, model interpretability toolkits) and general data analytics software.
- Reporting & Follow-up: Communicating AI audit findings (including technical results on bias or performance) to stakeholders. Recommending remediation for AI governance gaps. Ensuring follow-up on AI audit recommendations.
- Domain 4: Ethics, Compliance & Emerging Issues (≈10–15%)
- Regulatory Compliance: High-risk AI regulatory requirements (EU AI Act categories, UK/US guidance). Auditing compliance with AI regulations, privacy laws, industry-specific rules (finance, healthcare, etc.).
- Ethical AI Frameworks: Audit of ethical considerations – e.g. evaluating model explainability (XAI techniques like LIME/SHAP) and accountability. Checking alignment with corporate social responsibility (see ISO 26000) and ethical codes.
- Third-Party and Supply Chain Risk: Managing and auditing risks from AI vendors, open-source models, or data suppliers. Ensuring vendor AI services meet governance standards.
- Workforce and Change Management: Assessing how AI adoption impacts people and processes. Evaluating training programs and governance structures to support AI literacy and change.
- Generative AI and Future Trends: Special considerations for generative AI (prompt engineering, hallucinations, IP issues) and upcoming tech trends. (Notably, NIST’s Generative AI Risk Profile highlights these unique challenges.)
- Practical Case Studies: Applied scenarios (e.g. auditing an AI credit-decision system or an AI-powered cybersecurity tool). The AIGCA program emphasizes case-based learning and may include a capstone project or simulation.
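To make the deployment-monitoring and output-testing topics above concrete, here is a minimal sketch of one drift-detection technique an AI auditor might apply: comparing a model’s production score distribution against its validation baseline using the Population Stability Index (PSI). This example is illustrative only — the function, the bin count, the conventional thresholds mentioned in the comment, and the synthetic data are assumptions, not part of the AIGCA syllabus:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a baseline score distribution against a recent one.
    By common convention, PSI < 0.1 suggests stability and PSI > 0.25
    suggests significant drift worth investigating."""
    # Bin edges come from quantiles of the baseline distribution
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # cover the full range
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    # Clip to avoid division by zero / log(0) in sparse bins
    e_pct = np.clip(e_counts / len(expected), 1e-6, None)
    a_pct = np.clip(a_counts / len(actual), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)   # scores at model validation
current  = rng.normal(0.4, 1.0, 5000)   # shifted production scores
print(round(population_stability_index(baseline, current), 3))
```

An auditor would typically run such a check against an agreed-upon threshold as one piece of evidence that the monitoring controls described in Domain 2 actually operate.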
Each domain will be tested through scenario-based multiple-choice and case questions. The syllabus content aligns with industry guidance (e.g. ISO/IEC 42001’s AI governance requirements and NIST’s focus on trustworthy AI).
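Domain 3’s audit-evidence topics mention using sample data to test model outputs for fairness. As a hedged illustration, here is a minimal demographic-parity check in Python; the function name, the two-group setup, and the toy decision data are hypothetical, chosen only to show the shape of such a test:

```python
import numpy as np

def demographic_parity_gap(approved, group):
    """Absolute difference in approval rates between two groups —
    a common first-pass fairness check for, e.g., an AI credit model."""
    approved, group = np.asarray(approved), np.asarray(group)
    rate_a = approved[group == "A"].mean()
    rate_b = approved[group == "B"].mean()
    return abs(rate_a - rate_b)

# Hypothetical audit sample: model decisions plus a protected attribute
decisions = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
gap = demographic_parity_gap(decisions, groups)
print(f"approval-rate gap: {gap:.2f}")  # flag if above an agreed threshold
```

In practice an auditor would combine a simple rate comparison like this with statistical significance testing and with the explainability techniques (e.g. LIME/SHAP) named in Domain 4.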
Exam Structure, Format & Maintenance
The AIGCA exam is a computer-based certification test. It consists of roughly 100–120 multiple-choice questions built around real-world scenarios, covering all four domains. Candidates have about 3 hours to complete the exam, and a minimum scaled score of 60% is required to pass. CPGA offers exam sessions at accredited test centers and via remote proctoring.
- Prep Materials: CPGA will provide an official review guide and course. Practice questions and case study materials will be available (similar to ISACA’s AAIA Q&A database).
- CPE/Credential Maintenance: Certified professionals must complete annual Continuing Professional Education (CPE) hours focused on AI and audit (e.g. 20 CPE hours per year, with at least some hours in AI topics). These may overlap with other CPGA certification renewal requirements.
- Code of Ethics: Like all CPGA qualifications, AIGCA holders must adhere to CPGA’s ethical standards in all AI audit work.
Career Impact and Market Value
The AIGCA credential positions professionals at the forefront of a rapidly expanding field. Organizations deploying AI need auditors and risk managers who understand these systems end-to-end. Recent surveys indicate that roughly 85–94% of IT audit and risk professionals believe AI skills will be important for their careers. In practice, AI-trained auditors command premium salaries (averaging around US $73K, with senior roles exceeding $120K), and can work across sectors (finance, tech, healthcare, public sector) where AI governance is critical. The AIGCA qualification builds on a candidate’s professional accounting, auditing, or IS audit background and enables mid-career auditors to transition into these high-demand roles.
Get Started Today
Follow these simple steps to enroll in your desired certification:
- Choose Your Certification
- Check Eligibility
- Complete Application
- Submit Payment
- Start Learning
