TIST, 2026 (Journal Article)
FedPRS: A Privacy-preserving Representation Synthesis Framework for Federated Contribution Evaluation
- Yuwei Fan
- Yuan Yao
- Wei Xi
- Quan Zhao
- Zelei Liu
- Lixin Fan
- Qiang Yang
- Jian Jin
Federated Learning (FL) enables the collaborative training of a global model while protecting participants’ privacy. Evaluating each participant’s contribution is essential for providing a high-quality model, ensuring fairness, and mitigating potential biases. Most existing contribution evaluation approaches for FL assume that the server holds a public validation dataset. In practice, however, such a dataset is almost impossible to obtain due to privacy concerns. In this article, we propose a Federated Privacy-preserving Representation Synthesis (FedPRS) framework that synthesizes a validation dataset for contribution evaluation. FedPRS first transforms each participant’s private validation dataset into a representation dataset. A random-region desensitization strategy then further desensitizes this representation dataset without compromising its utility. The server collects the desensitized representation datasets from all participants and uses them to evaluate federated contribution in a way that accounts for both equity and privacy protection. Moreover, we instantiate and integrate three specific contribution evaluation approaches within this framework. We conduct experiments on various FL settings, including independent and identically distributed (IID) and non-IID data distributions. Experimental results demonstrate that contribution evaluation based on the validation dataset synthesized by FedPRS closely aligns with evaluation based on a real, private validation dataset.
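The abstract does not specify how random-region desensitization is implemented. As a purely illustrative sketch, assuming each participant's validation example is reduced to a fixed-size feature vector and desensitization overwrites one randomly chosen contiguous region with noise matched to the vector's statistics (the function name, region fraction, and noise model are all hypothetical, not the paper's algorithm):

```python
import numpy as np

def desensitize(rep: np.ndarray, region_frac: float = 0.3, rng=None) -> np.ndarray:
    """Hypothetical random-region desensitization sketch: overwrite one
    random contiguous region of a 1-D representation with Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    out = rep.copy()
    n = rep.shape[0]
    width = max(1, int(region_frac * n))          # size of the masked region
    start = rng.integers(0, n - width + 1)        # random region location
    # Noise is matched to the representation's mean/std so global statistics
    # (and hence downstream utility) are roughly preserved.
    out[start:start + width] = rng.normal(rep.mean(), rep.std() + 1e-8, size=width)
    return out

rep = np.arange(10, dtype=float)       # stand-in for one private representation
desens = desensitize(rep)
print(desens.shape)                    # same shape as the input representation
```

In this sketch, a participant would apply `desensitize` to each representation before uploading it, so the server only ever sees vectors with a randomly perturbed region rather than the raw private features.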