ISO Subcommittee 42 (SC 42) - AI
ISO Subcommittee 42 (ISO/IEC JTC 1/SC 42) is responsible for all things artificial intelligence (AI) and maintains an ambitious array of AI standards, with more on the way. While other ISO committees have published, and will continue to publish, AI-related content, SC 42 is charged with leading the way within ISO.
Here are some of the key standards that are essential for those implementing an ISO 42001 Artificial Intelligence Management System (AIMS). The full list of SC 42 standards can be found here.
ISO 5338:2023 Artificial Intelligence - AI System Life Cycle Processes
ISO 8183:2023 Artificial Intelligence - Data Life Cycle Framework
ISO 22989:2022 Artificial Intelligence - Artificial Intelligence Concepts and Terminology
ISO 23894:2023 Artificial Intelligence - Guidance on Risk Management
ISO 24030:2024 Artificial Intelligence - Use Cases
ISO 24368:2022 Artificial Intelligence - Overview of Ethical and Societal Concerns
ISO 38507:2022 Governance of IT - Governance Implications of the Use of Artificial Intelligence by Organizations
ISO 42005:2025 Artificial Intelligence - AI System Impact Assessment
NOTE: PROCESS 360 is a voting member of SC 42, as a member of the U.S. Technical Advisory Group (TAG), and participates in the SC 42 working groups (WGs), except for WG 4 Use Cases and Applications:
WG 1: Foundational Standards
WG 2: Data
WG 3: Trustworthiness
WG 4: Use Cases and Applications
WG 5: Computational Approaches
ISO 42005 AI System Impact Assessments
In 2025, ISO published ISO 42005 AI System Impact Assessment, which aligns with ISO 42001:2023 Clauses 6.1.4 and 8.4, both titled AI System Impact Assessment. This impact assessment is unique to ISO 42001 and is intended to address risks that may affect individuals, groups, or society at large. The standard supports transparency, accountability, and trust in AI by helping organizations identify, evaluate, and document potential impacts throughout the AI system lifecycle.
The output of an AI System Impact Assessment is used as an input to the AI Risk Assessment process and can determine the necessary AI controls as part of the AI Risk Treatment process, through risk avoidance, risk mitigation, risk transfer, or risk acceptance. The AI System Impact Assessment is also intended to identify benefits to individuals, groups of individuals, and society, which the organization should take action to protect.
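To make that flow concrete, here is a minimal, illustrative sketch in Python of how an impact finding might be recorded and carried into a risk register entry with a treatment decision. The field names, rating scales, and class structure are our own assumptions for illustration; they are not data structures defined by ISO 42005 or ISO 42001.

```python
# Illustrative sketch only: how an AI system impact assessment finding might feed
# a risk register entry and a risk treatment decision. Field names and scales are
# assumptions for illustration, not terminology defined by ISO 42005 or ISO 42001.
from dataclasses import dataclass
from enum import Enum


class RiskTreatment(Enum):
    AVOID = "risk avoidance"
    MITIGATE = "risk mitigation"
    TRANSFER = "risk transfer"
    ACCEPT = "risk acceptance"


@dataclass
class ImpactFinding:
    """One documented impact on individuals, groups, or society."""
    ai_system: str
    affected_party: str          # e.g., "loan applicants", "society at large"
    description: str
    is_benefit: bool             # benefits are recorded and protected, not only harms
    lifecycle_stage: str         # e.g., "design", "deployment", "retirement"


@dataclass
class RiskRegisterEntry:
    """Risk assessment record derived from an impact finding."""
    finding: ImpactFinding
    likelihood: int              # 1 (rare) .. 5 (almost certain) - assumed scale
    severity: int                # 1 (negligible) .. 5 (critical) - assumed scale
    treatment: RiskTreatment
    controls: list[str]          # controls selected during risk treatment


# Example: an impact finding flows into the risk assessment and treatment steps.
finding = ImpactFinding(
    ai_system="credit scoring model",
    affected_party="loan applicants",
    description="Potential for biased denial of credit",
    is_benefit=False,
    lifecycle_stage="deployment",
)
entry = RiskRegisterEntry(
    finding=finding,
    likelihood=3,
    severity=4,
    treatment=RiskTreatment.MITIGATE,
    controls=["bias testing before release", "human review of adverse decisions"],
)
```

The point of the sketch is simply that each documented impact, whether a harm or a benefit, becomes an input the risk assessment can rate and the risk treatment step can act on.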