Abstract: As algorithmic systems become deeply embedded in social life, algorithmic risks and harms have grown increasingly prominent, making effective accountability mechanisms an urgent requirement of algorithmic governance. The algorithmic audit accountability system aims to comprehensively evaluate the technical risks and social impacts of algorithms, enabling effective supervision of algorithm operators. The system comprises three core components: a framework of actors consisting of audit executors, auditees, and report users; an ethical standard system forming a multi-level set of norms at the international, national, industry, and operator levels; and a responsibility mechanism encompassing compliance obligations, auditability requirements, and liability determination together with corrective mechanisms. From the perspective of socio-technical systems theory, algorithmic audits should transcend a purely technical viewpoint and examine the interactions between technical implementation and social impact, forming an accountability system oriented toward the general public. The Administrative Measures for Personal Information Protection Compliance Audits provide a preliminary legal basis for this system, but specialized legislation, such as a dedicated Artificial Intelligence Law, is still needed to systematically regulate algorithmic audit accountability and ensure the transparent, fair, and responsible operation of algorithmic systems.