Abstract: Artificial Intelligence (AI) has contributed significantly to societal progress, yet it also introduces a range of risks. As an emerging auditing paradigm, AI auditing plays a critical role in promoting responsible AI innovation. Following the logical progression of “What is AI auditing? What should be audited? How should it be audited?”, this study clarifies the theoretical foundations of AI auditing, delineates its accountability boundaries, and organizes its auditing subjects, principles, and implementation pathways into a comprehensive AI auditing framework. The study yields three key findings. First, AI auditing refers to “auditing AI” and serves as an accountability mechanism for the responsible innovation of AI. Second, AI auditing focuses on the accountability and auditability of AI, clarifying the risks and accountability boundaries across the AI innovation chain; its scope is centered on AI models, extending inward to data and algorithms and outward to products and ecosystems. Third, AI auditing unfolds along three dimensions: auditing subjects, auditing principles, and implementation pathways. Internal auditing, social auditing, and national auditing each play distinct roles, addressing technical reliability, safety compliance, and ethical objectives throughout the entire process, from audit planning to execution and completion. By integrating the perspective of responsible AI innovation with audit accountability mechanisms, this study constructs a localized and contemporary AI auditing framework, contributing to the development of a distinctly Chinese independent auditing knowledge system.