Capacity-Building Activity | Estonia
The BIAS Project’s 2nd capacity-building activity in Tallinn, Estonia, took place on October 6th, 2025. The event, organised by partner Digiotouch, brought together professionals from HR and AI sectors to address one of the most pressing challenges in contemporary recruitment: algorithmic bias in artificial intelligence (AI). Drawing from interactive presentations, real-world studies, and collaborative dialogue, the session offered deep insights into the impacts of AI bias and the strategies needed for ethical, inclusive hiring.

Training Objectives and Outcomes
Throughout the session, the agenda focused on equipping attendees with practical knowledge:
-
Understanding the dynamics and sources of bias in AI.
-
Gaining foundational knowledge of artificial intelligence and its applications in HR.
-
Exploring the ethical consequences of AI-driven recruitment.
-
Learning to navigate and implement legal frameworks to mitigate bias.
-
Developing critical thinking to challenge and counteract discriminatory system outcomes.
-
Building peer networks for continued learning and exchange.
By grounding the training in interactive, scenario-based exercises and fostering open discussion, the session succeeded in transforming theoretical concerns into applicable strategies. Attendees accessed a peer-to-peer learning environment, enabling knowledge sharing across diverse roles, from HR professionals to NGO representatives.
Exploring AI, Bias, and Discrimination
The session (conducted by BIAS partner Smart Venice) opened with a critical look at how discrimination and bias manifest in AI recruitment tools. One of the focal presentations showcased experimental data using OpenAI's ChatGPT-3.5, illustrating clear disparities in the treatment of candidates based on attributes like race and gender. The data highlighted both worse and better treatment outcomes for various demographic groups, sparking meaningful discussion around the tangible impacts of algorithmic decision-making in hiring processes. Participants were able to see concrete evidence of how even advanced AI can reinforce systemic biases if not vigilantly monitored and corrected.
.jpeg)
Intersectionality: A Key Analytical Lens
Participants were also introduced to the concept of intersectionality, drawing on Kimberlé Williams Crenshaw’s foundational definition. Through visual tools such as the "intersectional identities" and "identity wheel" graphics, attendees reflected on the complex interplay between race, gender, disability, class, and more. These frameworks helped foster a nuanced understanding that bias in AI is rarely a matter of a single attribute; rather, multiple aspects of identity intersect to shape people’s experiences and the risks of discrimination they face.

.jpeg)
Legal Frameworks and the EU AI Act
One key outcome of the session was enhanced awareness of the legal frameworks guiding responsible AI use. A dedicated segment on the EU AI Act detailed its risk-based approach, distinguishing between prohibited, high risk, potential high risk, and minimal risk applications. Participants learned how legal standards like the GDPR and the AI Act can be operationalized to prevent and respond to discrimination, reinforcing the need for both compliance and proactive advocacy in their organizations.
Studies and Tools for Measuring Bias
Presenters highlighted essential research, such as Joy Buolamwini’s influential "Gender Shades" study, which revealed sharp disparities in gender classifier accuracy across demographic groups. Participants discussed real-life consequences, such as lower recognition accuracy for darker-skinned females in commercial facial recognition systems. They were introduced to technical concepts like the Word Embedding Association Test (WEAT), which measures the degree of bias in word embeddings and how these subtleties in language models can perpetuate harmful stereotypes.
Impact on Participants and the Project
The impact of the session extended beyond improved technical literacy. Many participants expressed a renewed commitment to championing fairness, diversity, and inclusion in their organizations. The hands-on approach (combining data-driven presentations, reflection tools, and legal guidelines) allowed for immediate assimilation of best practices.
A particularly meaningful exercise involved participants analyzing CVs as if they were part of a real hiring process, then comparing their collective decisions with the selections made by a Blackbox AI recruitment system. This comparison illustrated not only the divergences in candidate profiles chosen by humans versus AI but also prompted critical discussion on why the AI prioritized certain profiles, revealing underlying biases encoded in the system. This exercise reinforced the importance of human oversight and the need to continuously audit AI tools for fairness.
This activity empowered organizations to become stronger advocates for ethical AI and set in motion concrete steps toward more just and effective hiring.
