Published: 2026-04-10
Analysis of Gender Inequality in Artificial Intelligence-Based Recruitment Systems: A Systematic Literature Review (SLR)
DOI: 10.35870/ijsecs.v6i1.6746
Herdaning Sandra Kumalasari, Magdalena A. Ineke Pakereng
Abstract
The increasing adoption of Artificial Intelligence (AI) in recruitment has raised concerns about algorithmic discrimination that may disadvantage certain groups, particularly women. This study analyzed gender inequality in AI-based recruitment systems by synthesizing evidence from both technical and ethical perspectives. A Systematic Literature Review (SLR) was conducted on studies published between 2020 and 2025, applying predefined inclusion and exclusion criteria, followed by screening, quality assessment, and thematic synthesis. Ten studies (n = 10) met the eligibility criteria and quality threshold and were retained for review. Historically imbalanced training data emerged as the most frequently reported driver of gender bias, often producing unfair screening, ranking, and selection outcomes. Fairness conclusions were found to depend strongly on how recruitment outcomes were defined and measured, and prior studies consistently called for multiple fairness metrics supported by auditing practices. The literature also identified mitigation strategies spanning data balancing, fairness-aware model evaluation, transparency and audit mechanisms, and human oversight in decision-making. Gender bias in AI-based recruitment is, at its core, a socio-technical problem that requires combined interventions across data governance, model evaluation, and organizational accountability; research gaps remain for future empirical validation and responsible AI deployment.
Keywords
Artificial Intelligence; Recruitment; Gender Bias; Systematic Literature Review; Algorithmic Fairness
Peer Review Process
This article has undergone a double-blind peer review process to ensure quality and impartiality.
Article Information
This article has been peer-reviewed and published in the International Journal Software Engineering and Computer Science (IJSECS). The content is available under the terms of the Creative Commons Attribution 4.0 International License.
- Issue: Vol. 6 No. 1 (2026)
- Section: Articles
- Published: 2026-04-10
- License: CC BY 4.0
- Copyright: © 2026 Authors
- DOI: 10.35870/ijsecs.v6i1.6746
Herdaning Sandra Kumalasari, Satya Wacana Christian University
Informatics Engineering Study Program, Faculty of Information Technology, Universitas Kristen Satya Wacana, Salatiga City, Central Java Province, Indonesia
This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors who publish with this journal agree to the following terms:
1. Copyright Retention and Open Access License
Authors retain copyright of their work and grant the journal non-exclusive right of first publication under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
This license allows unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
2. Rights Granted Under CC BY 4.0
Under this license, readers are free to:
- Share — copy and redistribute the material in any medium or format
- Adapt — remix, transform, and build upon the material for any purpose, including commercial use
- No additional restrictions — the licensor cannot revoke these freedoms as long as license terms are followed
3. Attribution Requirements
All uses must include:
- Proper citation of the original work
- Link to the Creative Commons license
- Indication if changes were made to the original work
- No suggestion that the licensor endorses the user or their use
4. Additional Distribution Rights
Authors may:
- Deposit the published version in institutional repositories
- Share through academic social networks
- Include in books, monographs, or other publications
- Post on personal or institutional websites
Requirement: All additional distributions must maintain the CC BY 4.0 license and proper attribution.
5. Self-Archiving and Pre-Print Sharing
Authors are encouraged to:
- Share pre-prints and post-prints online
- Deposit in subject-specific repositories (e.g., arXiv, bioRxiv)
- Engage in scholarly communication throughout the publication process
6. Open Access Commitment
This journal provides immediate open access to all content, supporting the global exchange of knowledge without financial, legal, or technical barriers.