ISO 9001 meets ISO/IEC 42001: governing AI inside your QMS
AI is already making decisions inside your processes — vendor scoring, defect classification, customer triage. ISO/IEC 42001 gives you a way to govern it, and ISO 9001:2026 expects you to.
AI has quietly become part of the operating model in most mid-sized organizations. A model triages incoming support tickets. Another scores supplier risk. A vision system flags surface defects on the line. None of these were procured as "AI projects" — they arrived inside SaaS tools, embedded in features, often without the quality team being told.
ISO/IEC 42001, published in 2023, is the first management system standard for artificial intelligence. Built on the same Harmonized Structure as ISO 9001, it slots into an existing QMS without forcing a parallel system. The 2026 revision of ISO 9001 explicitly anticipates this — process control under clause 8 now has to acknowledge that some of those processes are running on models, not procedures.
What ISO/IEC 42001 actually adds
- An AI inventory — every model in use, what it decides, who owns it (a register sketch follows this list)
- Risk and impact assessments specific to AI systems (bias, drift, explainability)
- Lifecycle controls from data sourcing through deployment to decommissioning
- Human oversight requirements proportionate to the decision being automated
- Incident handling for AI-specific failure modes
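As a concrete starting point, the sketch below shows what one inventory entry might capture, in Python for illustration. Every field name and the oversight tiers here are assumptions made for the example, not terms taken from either standard.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class OversightLevel(Enum):
    """Illustrative oversight tiers, proportionate to the decision automated."""
    HUMAN_IN_THE_LOOP = "human approves every output"
    HUMAN_ON_THE_LOOP = "human reviews samples and exceptions"
    AUTONOMOUS = "no routine human review"

@dataclass
class AIRegisterEntry:
    """One row in the AI inventory: what the model is, what it decides, who owns it."""
    system_name: str               # e.g. "Support ticket triage"
    decision_made: str             # the business decision the model influences
    owner: str                     # an accountable role, not a team alias
    vendor_or_internal: str        # "internal", or the vendor supplying the model
    oversight: OversightLevel
    last_impact_assessment: date   # most recent bias/drift/explainability review
    decommission_plan: str = "none documented"

# Hypothetical entry for the defect-flagging vision system from the introduction
entry = AIRegisterEntry(
    system_name="Line 3 surface-defect vision system",
    decision_made="Flags parts for manual inspection",
    owner="Quality Engineering Manager",
    vendor_or_internal="embedded in vendor SaaS",
    oversight=OversightLevel.HUMAN_ON_THE_LOOP,
    last_impact_assessment=date(2025, 3, 1),
)
```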
Where it intersects with ISO 9001
Clause 4 — Context
The use of AI by you, your suppliers, and your customers is now a relevant context factor. Pretending it is not creates an audit-visible gap.
Clause 7.1.6 — Organizational knowledge
A model trained on internal data is organizational knowledge. So is the prompt library your team has built. So is the dataset used to validate a vision system. None of this was on anyone's radar when the 2015 edition was written.
Clause 8.4 — Externally provided processes
If a SaaS vendor uses AI inside a service you depend on — fraud detection, content moderation, document classification — you are consuming an externally provided process. The standard expects you to control it.
Clause 9 — Performance evaluation
Model drift is a quality problem. A classifier that was 96% accurate at deployment and is 89% accurate today has degraded a process. Without monitoring, you will discover this from a customer complaint, not a dashboard.
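A minimal monitoring sketch, assuming you can label a sample of recent outputs and compare measured accuracy against the deployment baseline. The 3-point tolerance is an illustrative threshold, not a figure from either standard; set it from your own process capability.

```python
def check_for_drift(recent_correct: int, recent_total: int,
                    baseline_accuracy: float, tolerance: float = 0.03) -> dict:
    """Flag a model whose recent accuracy has fallen below baseline minus tolerance."""
    recent_accuracy = recent_correct / recent_total
    degraded = recent_accuracy < baseline_accuracy - tolerance
    return {
        "recent_accuracy": round(recent_accuracy, 3),
        "baseline_accuracy": baseline_accuracy,
        "degraded": degraded,
        # A degraded model is a process nonconformity, same as a drifting machine
        "action": "raise nonconformity, review model" if degraded else "none",
    }

# The scenario from the text: 96% at deployment, 89% today
print(check_for_drift(recent_correct=890, recent_total=1000, baseline_accuracy=0.96))
# degraded=True: a 7-point drop is well outside a 3-point tolerance
```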
How to integrate without doubling the work
- Add an AI register to your existing documented information set
- Extend your supplier evaluation criteria to cover AI use disclosed by vendors
- Treat model changes the same way you treat process changes — change control applies (a minimal gate is sketched after this list)
- Bring AI incidents into your existing nonconformity and corrective action workflow
- Train internal auditors on the basics of model risk; they do not need to be data scientists
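To make the change-control point concrete, here is a minimal gate, sketched with illustrative field names that would map to whatever your change-request form already captures. Any "yes" answer routes the change through your existing process-change workflow.

```python
def requires_change_control(change: dict) -> bool:
    """Ask the same questions of a model change that you ask of a process change."""
    return any([
        change.get("retrained_on_new_data", False),        # training data is a process input
        change.get("model_version_changed", False),        # including vendor-pushed upgrades
        change.get("threshold_or_prompt_changed", False),  # the decision boundary moved
        change.get("scope_of_use_expanded", False),        # the model now decides something new
    ])

# A vendor silently upgrading the model behind a SaaS feature still counts
vendor_push = {"model_version_changed": True}
assert requires_change_control(vendor_push)
```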
“You do not need a separate AI management system. You need your QMS to admit that AI is inside it.”