Assessing Algorithmic Fairness and Bias in Predictive Social Data Science Models in German Public Administration
Abstract
This study investigates algorithmic bias in machine learning (ML) systems deployed within German public administration, specifically in decision support for social resource allocation and urban planning. It employs a Computational Fairness Audit methodology that integrates statistical bias measurement with sociologically grounded fairness theory. The research operates within the layered regulatory environment of the European Union's Artificial Intelligence Act, the General Data Protection Regulation, and the German Sozialgesetzbuch, and constructs an audit protocol that is simultaneously technically rigorous, legally compliant, and sociologically informed. Using simulated administrative datasets and pseudonymised historical records compiled in compliance with German federal data protection law, the study operationalises six socially grounded fairness metrics and applies them to three ML model architectures (logistic regression, gradient boosting, and a deep neural network) trained on tasks representative of Jobcenter benefit allocation scoring and municipal social housing prioritisation. Statistical measurement of implicit discrimination against minority and socially vulnerable groups, defined with reference to the protected characteristics enumerated in the Allgemeines Gleichbehandlungsgesetz, reveals consistent and statistically significant fairness metric violations across all three architectures. The gradient boosting and deep neural network models show substantially larger demographic parity gaps for Turkish-German, refugee-background, and single-parent household subgroups than for the majority reference group. The audit further shows that model accuracy, as measured by standard classification performance metrics, is inversely correlated with fairness metric performance across protected subgroups. This finding directly challenges the technically unwarranted assumption, prevalent in German public-sector AI procurement, that high model accuracy implies equitable decision outcomes. The study contributes the Socially Grounded Algorithmic Audit Framework (SGAAF) as a replicable, legally anchored protocol for the pre-deployment and in-deployment assessment of ML systems in German public administration contexts.
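To make the study's central fairness quantity concrete, the sketch below shows how a demographic parity gap of the kind reported above can be computed from binary model predictions. This is a minimal illustration in Python, not the study's audit code; the group labels, predictions, and reference group are hypothetical placeholders.

    import numpy as np

    def demographic_parity_gap(y_pred, group, reference):
        """Absolute difference in positive-prediction rates between
        each group and a reference group (demographic parity gap)."""
        y_pred = np.asarray(y_pred)
        group = np.asarray(group)
        ref_rate = y_pred[group == reference].mean()
        return {
            g: abs(y_pred[group == g].mean() - ref_rate)
            for g in np.unique(group) if g != reference
        }

    # Hypothetical example: binary benefit-approval predictions for
    # a majority reference group and two protected subgroups.
    y_pred = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
    group = ["majority"] * 6 + ["subgroup_a"] * 3 + ["subgroup_b"] * 3
    print(demographic_parity_gap(y_pred, group, reference="majority"))

A gap of zero would indicate identical positive-prediction rates for a subgroup and the reference group; the audit reports such gaps for each protected subgroup and model architecture.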
Full Text: PDF

References
Antidiskriminierungsstelle des Bundes (ADS). (2022). Diskriminierung im Bereich des Sozialrechts: Ergebnisse des Monitorings 2020-2022 [Discrimination in the field of social law: Results of the 2020-2022 monitoring]. ADS.
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias: There's software used across the country to predict future criminals. And it's biased against blacks. ProPublica.
Barocas, S., & Hardt, M. (2017). Fairness in machine learning [Tutorial]. 31st Conference on Neural Information Processing Systems (NeurIPS).
Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and machine learning: Limitations and opportunities. fairmlbook.org.
Bundesagentur für Arbeit. (2023). Statistik der Grundsicherung für Arbeitsuchende nach dem SGB II: Jahresbericht 2022 [Statistics on basic income support for jobseekers under SGB II: Annual report 2022]. Bundesagentur für Arbeit.
Chen, T., & Guestrin, C. (2016). XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785-794. https://doi.org/10.1145/2939672.2939785
Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153-163. https://doi.org/10.1089/big.2016.0047
Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1), eaao5580. https://doi.org/10.1126/sciadv.aao5580
Dwork, C., Hardt, M., Pitassi, T., Reingold, O., & Zemel, R. (2012). Fairness through awareness. Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, 214-226. https://doi.org/10.1145/2090236.2090255
Dworkin, R. (1977). Taking rights seriously. Harvard University Press.
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin's Press.
European Parliament. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union, L 2024/1689.
Kuhlmann, S., & Heuberger, M. (2021). Digitalisation of public administration in Germany: Characteristics, challenges and outlook. Public Management Review, 23(1), 1-25. https://doi.org/10.1080/14719037.2021.1872916
Kusner, M. J., Loftus, J., Russell, C., & Silva, R. (2017). Counterfactual fairness. Advances in Neural Information Processing Systems, 30, 4066-4076.
Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 1-35. https://doi.org/10.1145/3457607
Sweeney, L. (2002). k-anonymity: A model for protecting privacy. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 10(5), 557-570. https://doi.org/10.1142/S0218488502001648
DOI: https://doi.org/10.51817/jas.v7i1.446

This work is licensed under a Creative Commons Attribution 4.0 International License.