Exploring Fairness Challenges in Differentially Private Machine Learning
DOI: https://doi.org/10.56028/aetr.15.1.1574.2025
Keywords: Differential Privacy (DP); Machine Learning; Algorithmic Fairness; Differentially Private Stochastic Gradient Descent (DP-SGD); Privacy-Utility-Fairness Trade-off.

Abstract
Differential Privacy (DP), the gold standard for privacy protection in machine learning, provides rigorous mathematical guarantees but also acts as a double-edged sword, posing a potential threat to algorithmic fairness. This study systematically shows that DP, particularly its mainstream implementation paradigm, Differentially Private Stochastic Gradient Descent (DP-SGD), disproportionately impairs model performance on minority or underrepresented subgroups, thereby exacerbating rather than mitigating existing algorithmic biases. The paper first elucidates the core concepts and mechanisms of DP, then analyzes the empirical evidence for its disparate impact and the mechanisms behind it, identifying gradient clipping and noise injection as the key contributing factors. Building on this analysis, it comprehensively reviews state-of-the-art techniques for mitigating the issue, charting an evolutionary path from initial problem identification, through the remediation of existing mechanisms (e.g., group-wise and adaptive clipping), to the principled co-design of privacy and fairness (e.g., FairDP). The aim is to provide a comprehensive theoretical perspective and a technical roadmap for building trustworthy artificial intelligence systems that concurrently ensure privacy protection and social equity.
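To make concrete the two mechanisms the abstract identifies as sources of disparate impact, the sketch below shows a single DP-SGD update in NumPy, following the standard per-example clip-then-noise recipe. This is a minimal illustrative example under assumed defaults, not the paper's implementation; the function name dp_sgd_step and the parameters clip_norm and noise_multiplier are hypothetical.

```python
# Illustrative DP-SGD step (hypothetical sketch, not the paper's code).
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1,
                clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One differentially private SGD update over a minibatch.

    per_example_grads: array of shape (batch_size, dim), one gradient
    per training example (DP-SGD requires per-example gradients).
    """
    rng = np.random.default_rng() if rng is None else rng
    # 1) Gradient clipping: rescale each example's gradient to L2 norm
    #    at most clip_norm. Examples with large gradients, often those
    #    from subgroups the model fits poorly, lose the most signal here.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale
    # 2) Noise injection: add Gaussian noise calibrated to the clipping
    #    bound. This noise floor weighs proportionally more on small
    #    subgroups, whose aggregate gradient signal is weaker.
    noise = rng.normal(0.0, noise_multiplier * clip_norm,
                       size=params.shape)
    noisy_grad = (clipped.sum(axis=0) + noise) / len(per_example_grads)
    return params - lr * noisy_grad

# Toy usage: a single update on random per-example gradients.
rng = np.random.default_rng(0)
params = np.zeros(4)
grads = rng.normal(size=(32, 4))
params = dp_sgd_step(params, grads, rng=rng)
```

The fairness-oriented remedies surveyed later in the paper intervene at exactly these two points: group-wise and adaptive clipping adjust step 1, while co-design approaches such as FairDP reconsider how the noise in step 2 is allocated.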