Dallas Bishoff

Privacy in the World of ISO 42001

Artificial intelligence handles a wide array of data types, across different use cases, industry sectors with their own considerations, and diverse regulatory and legal jurisdictions. This post focuses on privacy data and some of the ISO standards that enable responsible privacy management, rules you may not fully understand until your AI system has already broken them and put your organization at risk.

PRO TIP: ISO 42001, Annex D.2 Integration of AI Management System with Other Management Systems, identifies that an ISO 27701 Privacy Information Management System can be implemented with ISO 42001 as an integrated management system. However, you do not have to formally implement ISO 27701 to leverage its Annex A privacy controls, which can help you protect privacy data.

ISO 29100:2024 Privacy Framework

First published in 2011, amended slightly in 2018, and updated with minor revisions in 2024, this standard addresses areas such as privacy actors, policies, and the use of privacy controls, and it defines the eleven (11) Privacy Principles (Clause 6). Many of these principles also appear in national laws.

PRO TIP: The Privacy Principles in ISO 29100 can help you formulate and de-conflict your Responsible AI principles. It is the smart thing to do. Various ISO 42001 Annex A controls require you to understand, implement, and monitor your Responsible AI objectives, which may overlap with the privacy management domain, including: A.6.1.2 Objectives for Responsible Development of AI System; A.6.1.3 Processes for Responsible AI System Design and Development; A.9.2 Processes for Responsible Use of AI Systems; A.9.3 Objectives for Responsible Use of AI System.

ISO 27701:2025 Privacy Information Management Systems – Requirements and Guidance

The Privacy Information Management System (PIMS) standard was updated just a couple of weeks ago, and the new controls defined in Annex A can help your organization mitigate the risks to privacy data in AI systems. These include controls for PII Controllers, controls for PII Processors, and a new representation of the privacy control extensions to the ISO 27002:2022 Annex A control set.

PRO TIP: Make sure you understand the degree to which your AIMS scope aligns with your PIMS scope. Privacy systems, like an HR platform, may be in your ISO 27701 scope but not within your AIMS scope. Also, consider how your organization will evidence privacy management in the AIMS Annex A controls, including A.2.3 Alignment with Other Organizational Policies; A.5.4 Assessing AI System Impact on Individuals or Groups of Individuals (see also ISO 42005).

ISO 27018:2025 Guidelines for Protection of Personally Identifiable Information (PII) in Public Clouds Acting as PII Processors

Most AI systems are implemented in cloud environments. This standard was recently updated to align with the ISO 27002 Annex A controls. Where ISO 27017 addresses cloud security, ISO 27018 focuses on privacy in cloud service environments and on defining the PII Controller and PII Processor roles. Of note, pay attention to Annex A – Public Cloud PII Processor Extended Control Set for PII Protection.

PRO TIP: Control implementations in a cloud services environment can look very different, because controls can be bifurcated between the cloud provider and the customer under the shared responsibility model. Examples of controls that are supplemented include A.5.26 Response to Information Security Incidents; A.8.13 Information Backup; A.8.15 Logging; A.8.24 Use of Cryptography.

ISO 27557:2022 Application of ISO 31000:2018 for Organizational Privacy Risk Management

ISO 31000:2018 is the foundational risk management process model used by other ISO risk standards, including ISO 23894:2023 Artificial Intelligence - Guidance on Risk Management. This standard applies that model to organizational privacy risk, addressing risk assessment (identification, analysis, and evaluation), risk treatment, monitoring and review, and recording and reporting.

PRO TIP: Pay particular attention to each of the four annex sections. Annex A helps you with PII processing identification; Annex B provides example privacy events and causes; Annex C will be particularly applicable to your AI System Impact Assessment (ISO 42005); and Annex D provides a representative severity scale for privacy impacts on individuals.

ISO 27563:2023 Security and Privacy in Artificial Intelligence Use Cases – Best Practices

This publication supplements ISO 24030:2024 Artificial Intelligence (AI) – Use Cases, which collects generalized use cases in Clause 6 Use Cases and use case summaries spanning 18 industry sectors in Clause 7 Use Case Summaries.

PRO TIP: Use cases change how privacy data is collected, how it is processed, the prospective threats, and how the use case should be protected. Of note, Annex A extends the ISO 24030 collection and analysis of use cases.

ISO 27091 (DIS) Cybersecurity and Privacy – Artificial Intelligence – Privacy Protection

My access to this publication is based on membership in ISO Subcommittee 27 (SC 27) Information Security, Cybersecurity, and Privacy Protection. ISO 27091 exists as a Draft International Standard (DIS); pending voting within ISO, it may soon be made available for public comment. At a high level, this publication addresses privacy threats in AI models, privacy risks within AI systems, and how to apply privacy engineering across the AI system life cycle (see ISO 5338).

PRO TIP: When available, use this standard with ISO 27557 as you develop your Privacy Risk Assessment, or update any existing Privacy Risk Assessment to account for use within an AI system. (See ISO 27701, Clause 8.2 Privacy Risk Assessment; also see ISO 42001, Clause 8.2 AI Risk Assessment and Clause 8.4 AI System Impact Assessment.)

Dallas Bishoff

ISO Subcommittee 42 - AI

ISO 42005 AI System Impact Assessments

ISO Subcommittee 42 (SC 42) is responsible for all things artificial intelligence (AI) and maintains an ambitious array of AI standards, with more on the way. While other ISO committees have published, and will publish, AI-related content, SC 42 is charged with leading the way within ISO.

Here are some of the key standards that are essential for those implementing an ISO 42001 Artificial Intelligence Management System (AIMS). The full list of SC 42 standards can be found here.

ISO 5338:2023 Artificial Intelligence - AI System Life Cycle Processes

ISO 8183:2023 Artificial Intelligence - Data Life Cycle Framework

ISO 22989:2022 Artificial Intelligence - Artificial Intelligence Concepts and Terminology

ISO 23894:2023 Artificial Intelligence - Guidance on Risk Management

ISO 24030:2024 Artificial Intelligence - Use Cases

ISO 24368:2022 Artificial Intelligence - Overview of Ethical and Societal Concerns

ISO 38507:2022 Governance of IT - Governance Implications of the Use of Artificial Intelligence by Organizations

ISO 42005:2025 Artificial Intelligence - AI System Impact Assessment

NOTE: PROCESS 360 is a voting member of SC 42, as a member of the U.S. Technical Advisory Group (TAG), and participates in the SC 42 working groups (WGs), except for WG 4 Use Cases and Applications:

WG 1: Foundational Standards

WG 2: Data

WG 3: Trustworthiness

WG 4: Use Cases and Applications

WG 5: Computational Approaches

Dallas Bishoff

ISO 42005 AI System Impact Assessments

ISO Subcommittee 42

In 2025, ISO published ISO 42005 AI System Impact Assessment, which aligns with ISO 42001:2023, Clauses 6.1.4 and 8.4, both named AI System Impact Assessment. This impact assessment is unique to ISO 42001 and is intended to address the risks that may affect individuals, groups, or society at large. The standard supports transparency, accountability, and trust in AI by helping organizations identify, evaluate, and document potential impacts throughout the AI system life cycle.

The output of an AI System Impact Assessment is used as an input to the AI Risk Assessment process and can determine necessary AI controls as part of the AI Risk Treatment process, through risk avoidance, risk mitigation, risk transfer, and risk acceptance. The AI System Impact Assessment is also intended to identify benefits to individuals, groups of individuals, and societies, which the organization should take action to protect.
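For teams that track these artifacts in a tool rather than a document, here is a minimal, hypothetical Python sketch of that flow: an impact assessment finding feeding a risk register entry and a treatment decision. The field names, structure, and control references are illustrative assumptions on my part, not anything prescribed by ISO 42005 or ISO 42001.

```python
# Hypothetical illustration only: nothing here is prescribed by ISO 42005 or ISO 42001.
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class TreatmentOption(Enum):
    """The four risk treatment options named above."""
    AVOID = "risk avoidance"
    MITIGATE = "risk mitigation"
    TRANSFER = "risk transfer"
    ACCEPT = "risk acceptance"


@dataclass
class ImpactFinding:
    """One finding from an AI System Impact Assessment (the input side)."""
    description: str
    affected_parties: List[str]   # individuals, groups of individuals, or society at large
    is_benefit: bool = False      # benefits should be identified and protected, not just harms


@dataclass
class RiskRegisterEntry:
    """A risk register row linking an impact finding to a treatment decision."""
    finding: ImpactFinding
    treatment: TreatmentOption
    controls: List[str] = field(default_factory=list)  # e.g., ISO 42001 Annex A control IDs


# Example: one impact finding flows into the risk register and drives a treatment choice.
finding = ImpactFinding(
    description="Model inference may expose PII of data subjects",
    affected_parties=["individuals", "groups of individuals"],
)
entry = RiskRegisterEntry(
    finding=finding,
    treatment=TreatmentOption.MITIGATE,
    controls=["A.5.4", "A.9.2"],  # hypothetical mapping, for illustration only
)
print(entry.treatment.value, entry.controls)
```

However you record it, the point is the same: the impact assessment output should be traceable into the risk register and into the controls selected during risk treatment.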
