At WeCP (We Create Problems), our goal is to provide fair, reliable, and inclusive skill assessments and interview solutions. Bias in assessments can have unintended consequences, from excluding top talent to promoting unfair hiring practices. To ensure unbiased evaluations, WeCP employs advanced bias mitigation techniques. This article explores these methods with real-world examples illustrating their impact on creating a more equitable hiring process.
1. Optimized Pre-processing
Optimized pre-processing involves modifying training data features and labels to reduce bias before training AI models.
Examples:
Candidate Scoring for Multilingual Jobs
When assessing language skills, WeCP ensures that training data includes diverse language dialects and regional variations. For instance, assessments for English-speaking roles include Indian English, British English, and American English variations to avoid disadvantaging non-native speakers.
Technical Test Balancing
For coding tests, WeCP optimizes training data by balancing datasets for candidates from different educational backgrounds, ensuring candidates from less prominent universities aren't penalized due to underrepresentation in training data.
Cultural Neutrality in Scenarios
Situational judgment tests are pre-processed to remove culturally specific scenarios (e.g., questions referencing sports like baseball or cricket) that might disadvantage candidates unfamiliar with such contexts.
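The snippet below is a deliberately simplified sketch of this idea, not WeCP's production pipeline: it merely resamples a hypothetical training frame so that no dialect group dominates, whereas full optimized pre-processing learns a probabilistic transformation of features and labels. All column names and values are illustrative.

```python
import pandas as pd

def balance_training_data(df: pd.DataFrame, group_col: str, seed: int = 0) -> pd.DataFrame:
    """Resample so every group (e.g. English dialect) is equally represented."""
    max_size = df[group_col].value_counts().max()
    balanced = (
        df.groupby(group_col, group_keys=False)
          .apply(lambda g: g.sample(max_size, replace=True, random_state=seed))
    )
    return balanced.reset_index(drop=True)

# Hypothetical training frame: 'dialect' is the group, 'score' the label.
train = pd.DataFrame({
    "dialect": ["IN", "IN", "UK", "US", "US", "US", "US"],
    "score":   [1, 0, 1, 1, 0, 1, 0],
})
print(balance_training_data(train, "dialect")["dialect"].value_counts())
```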
2. Reweighing
Reweighing adjusts the weights of different training examples to ensure that underrepresented groups are treated fairly in the AI's predictions.
Examples:
Demographic Balancing in Developer Assessments
WeCP adjusts weights in datasets to ensure fair representation of female developers in traditionally male-dominated fields like backend engineering.
Fairness in Sales Assessments
In assessments for sales roles, responses from underrepresented age groups are given slightly higher weights to balance datasets that skew towards younger professionals.
Geographic Representation in Remote Jobs
For remote roles, WeCP ensures fair evaluation by reweighing data for candidates from regions with historically lower internet connectivity to account for potential technical challenges during tests.
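To make the arithmetic concrete, here is a minimal sketch of the standard reweighing formula, weight = P(group) × P(label) / P(group, label), on synthetic data; the column names and values are hypothetical and are not WeCP's actual datasets.

```python
import pandas as pd

def reweighing_weights(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """w(group, label) = P(group) * P(label) / P(group, label).

    Under-represented (group, label) combinations get weights above 1,
    over-represented ones get weights below 1.
    """
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / len(df)

    expected = p_group.loc[df[group_col]].to_numpy() * p_label.loc[df[label_col]].to_numpy()
    observed = p_joint.loc[list(zip(df[group_col], df[label_col]))].to_numpy()
    return pd.Series(expected / observed, index=df.index, name="weight")

# Hypothetical data: 'gender' is the protected attribute, 'passed' the label.
data = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "M", "M"],
    "passed": [1, 0, 1, 1, 1, 0],
})
print(data.assign(weight=reweighing_weights(data, "gender", "passed")))
```

The resulting weights can then be passed to most scikit-learn estimators through their sample_weight argument.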
3. Adversarial Debiasing
Adversarial debiasing trains the main scoring model alongside an adversary that tries to predict protected attributes from the model's outputs; the model is optimized to make that adversary fail, reducing bias while maintaining accuracy.
Examples:
Removing Name Bias in NLP Models
When evaluating open-ended text answers, WeCP's AI ensures that names suggesting specific cultural or ethnic backgrounds don’t influence scores.
Role-specific Skills Emphasis
In project management assessments, adversarial debiasing ensures that soft skills like communication and leadership are given equal weight regardless of demographic attributes.
Neutral Processing of Candidate Photos
For image-based proctoring during tests, WeCP’s AI is adversarially trained to:
Detect if a candidate is present in the camera frame without considering their gender or appearance.
Ignore differences in facial structure or skin tone that could lead to inaccurate detection rates for specific demographics.
For instance, the AI accurately detects both male and female candidates equally during proctoring sessions, avoiding errors caused by biases in facial recognition technology.
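To show the mechanism, here is a minimal PyTorch sketch of adversarial debiasing on synthetic data (it is not WeCP's proctoring model): a predictor scores candidates while an adversary tries to recover a protected attribute from the predictor's output, and the predictor is trained to make that adversary fail.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Synthetic stand-ins: X = answer features, y = pass/fail label,
# a = protected attribute flag -- all hypothetical.
X = torch.randn(256, 8)
y = (X[:, 0] + 0.1 * torch.randn(256) > 0).float().unsqueeze(1)
a = (X[:, 1] > 0).float().unsqueeze(1)

predictor = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-2)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # strength of the debiasing term

for step in range(200):
    # 1) Train the adversary to recover the protected attribute from the
    #    predictor's output (detached, so only the adversary updates here).
    adv_loss = bce(adversary(predictor(X).detach()), a)
    opt_a.zero_grad(); adv_loss.backward(); opt_a.step()

    # 2) Train the predictor to score candidates well *and* to fool the
    #    adversary, i.e. to strip group information from its output.
    logits = predictor(X)
    loss = bce(logits, y) - lam * bce(adversary(logits), a)
    opt_p.zero_grad(); loss.backward(); opt_p.step()
```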
4. Reject Option Classification
Reject option classification modifies borderline predictions to ensure fairness in candidate evaluations.
Examples:
Fair Threshold Adjustments
Candidates whose scores hover around a cut-off (e.g., 70%) are re-evaluated for fairness, ensuring factors like test anxiety don’t disproportionately penalize them.
ℹ️ Note: This mitigation technique is still in beta and has not yet been fully rolled out to production.
AI Confidence Scoring
If the AI model's confidence in a prediction is low (e.g., 50%), the system adjusts the decision-making process, enabling human reviewers to ensure fairness.
Inclusive Decision-making
WeCP applies this technique in group interview evaluations, ensuring no candidate is excluded due to marginal score differences.
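Since this technique is still in beta, the following is only a conceptual sketch (with hypothetical thresholds and group encodings) of how a reject-option rule can behave: automatic decisions inside a narrow band around the cut-off are withheld, the unprivileged group gets the benefit of the doubt, and everything in the band is flagged for human review.

```python
import numpy as np

def reject_option_decision(prob, group, threshold=0.70, margin=0.05, unprivileged=0):
    """Apply a simplified reject-option rule to model scores.

    Outside the critical region [threshold - margin, threshold + margin],
    the usual cut-off applies. Inside it, unprivileged candidates receive
    the favourable outcome and every borderline case is flagged for review.
    """
    prob, group = np.asarray(prob, dtype=float), np.asarray(group)
    decision = (prob >= threshold).astype(int)
    in_band = np.abs(prob - threshold) <= margin
    decision[in_band & (group == unprivileged)] = 1
    return decision, in_band  # in_band doubles as the "needs human review" flag

# Hypothetical scores and group flags.
print(reject_option_decision([0.72, 0.68, 0.66, 0.40, 0.90], [1, 0, 1, 0, 0]))
```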
5. Disparate Impact Remover
This technique edits feature values in training data to minimize bias while maintaining data utility.
Examples:
Adjusting for Past Biases in Historical Data
When training AI on hiring data, WeCP removes historical biases such as overrepresentation of candidates from elite universities to ensure fair assessments.
Compensating for Regional Internet Access
WeCP adjusts for time penalties in coding challenges for candidates from regions with slower internet speeds.
Fairness in Behavioral Tests
Behavioral assessments that may favor extroverted personalities are adjusted to account for diverse working styles, including introverted candidates.
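A minimal sketch in the spirit of this technique is shown below, using made-up completion times: each value is moved toward the value at the same within-group quantile of the pooled distribution, so group membership can no longer be read off the feature while the ranking of candidates inside each group is preserved. The full disparate impact remover uses a median distribution across groups and a tunable repair level; this is only an approximation.

```python
import numpy as np

def repair_feature(values, groups, repair_level=1.0):
    """Rank-preserving repair of one feature across groups."""
    values, groups = np.asarray(values, dtype=float), np.asarray(groups)
    repaired = values.copy()
    pooled = np.sort(values)
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        ranks = values[idx].argsort().argsort()          # within-group order
        q = (ranks + 0.5) / len(idx)                     # within-group quantile
        target = np.quantile(pooled, q)                  # same quantile, pooled
        repaired[idx] = (1 - repair_level) * values[idx] + repair_level * target
    return repaired

# Hypothetical completion times skewed by slow connectivity in one region.
times = np.array([30.0, 32.0, 35.0, 50.0, 55.0, 60.0])
region = np.array([0, 0, 0, 1, 1, 1])
print(repair_feature(times, region))
```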
6. Learning Fair Representations
WeCP uses AI to learn fair representations by obfuscating sensitive information (e.g., gender, race) that could introduce bias.
Examples:
Removing Visual Cues in Video Interviews
Video proctoring AI analyzes only a candidate's responses and performance, ignoring appearance-based features such as clothing, hairstyle, or perceived age.
Facial Analysis Without Gender Bias
When candidates are required to record videos as part of the assessment, WeCP ensures that the AI:
Does not infer gender, age, or ethnicity from the candidate’s facial features.
Focuses solely on relevant performance metrics, such as eye contact, tone, and clarity of communication.
For instance, if two candidates (one male and one female) submit video responses for a customer support role, WeCP’s AI evaluates them based on the content of their answers and communication style, ignoring facial features or perceived gender.
Lighting and Visual Quality Adjustments
To prevent biases caused by differences in video quality (e.g., poorer lighting conditions for candidates in certain regions), WeCP applies preprocessing techniques to normalize brightness, sharpness, and contrast before analyzing videos.
Focus on Behavior Over Appearance
During video assessments, the AI assesses behaviors such as responsiveness to questions and problem-solving articulation, while ignoring physical attributes such as clothing, hairstyle, or makeup, which could lead to unconscious gender or cultural biases.
Equal Opportunity in Coding Assessments
For coding challenges, metadata about candidates (e.g., university, location) is hidden from evaluators to focus purely on technical performance.
Scenario-based Assessments
Customer service roleplays remove indicators of regional accents to ensure candidates are scored on communication skills, not accent familiarity.
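As a toy illustration of the goal (not WeCP's actual models), the sketch below produces a representation that is linearly uninformative about a protected attribute by regressing the attribute out of every feature; learned fair representations pursue the same objective with far richer encoders, and all data here is synthetic.

```python
import numpy as np

def decorrelate_from_protected(X, a):
    """Replace each feature with its residual after regressing out a,
    so the protected attribute cannot be recovered linearly from X."""
    X = np.asarray(X, dtype=float)
    a = np.asarray(a, dtype=float).reshape(-1, 1)
    A = np.hstack([np.ones_like(a), a])           # intercept + attribute
    beta, *_ = np.linalg.lstsq(A, X, rcond=None)  # per-feature linear fit
    return X - A @ beta                           # residual representation

rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=200)                  # hypothetical group flag
X = rng.normal(size=(200, 3)) + 0.8 * a[:, None]  # features that leak the group
Z = decorrelate_from_protected(X, a)
print(round(np.corrcoef(Z[:, 0], a)[0, 1], 6))    # ~0: no linear leakage left
```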
7. Prejudice Remover
Prejudice remover incorporates a discrimination-aware regularization term into the learning objective of the AI model.
Examples:
Balanced Team-building Assessments
In team collaboration scenarios, WeCP trains AI to recognize group dynamics objectively, ignoring implicit biases about leadership based on gender or age.
Equal Representation in Leadership Roles
For management roles, AI models are trained to value performance metrics equally across genders, countering societal biases that might otherwise favor male candidates.
Soft Skills Analysis
The AI ensures candidates’ communication scores are not influenced by stereotypes about their ability based on demographics.
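The sketch below shows the shape of such an objective on synthetic data (hypothetical features and group flags, not WeCP's production loss): a standard scoring loss plus a regularizer that penalizes dependence between predictions and the protected attribute. The published prejudice remover uses a mutual-information term (the "prejudice index"); a mean-score gap is used here as a simpler surrogate.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(300, 6)            # hypothetical assessment features
y = (X[:, 0] > 0).float()          # label: passed / not passed
a = (X[:, 1] > 0).float()          # protected attribute flag

model = nn.Linear(6, 1)
opt = torch.optim.Adam(model.parameters(), lr=5e-2)
bce = nn.BCEWithLogitsLoss()
eta = 2.0                          # strength of the fairness regularizer

for _ in range(300):
    logits = model(X).squeeze(1)
    p = torch.sigmoid(logits)
    task_loss = bce(logits, y)
    # Penalize the gap in mean predicted score between the two groups.
    fairness_penalty = (p[a == 1].mean() - p[a == 0].mean()).abs()
    loss = task_loss + eta * fairness_penalty
    opt.zero_grad(); loss.backward(); opt.step()
```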
8. Calibrated Equalized Odds Post-processing
Calibrated equalized odds post-processing modifies classifier predictions to ensure that outcomes for different demographic groups meet equalized odds criteria, balancing true positive and false positive rates.
Examples:
Balancing Technical Skill Outcomes
In coding assessments, WeCP adjusts predictions to ensure female developers, often underrepresented in tech datasets, are not unfairly penalized.
Soft Skills Equalization
When evaluating video-based soft skills (like communication or empathy), AI ensures candidates from non-native English-speaking countries are not disadvantaged by subtle biases in pronunciation or phrasing.
Scoring for Management Roles
Leadership evaluation tools are calibrated to ensure similar acceptance rates for male and female candidates with equivalent performance, reducing gender-related discrepancies.
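The following is a heavily simplified sketch of the core idea from Pleiss et al. (2017), on made-up scores rather than WeCP's models: for the advantaged group, a fraction of calibrated scores is replaced by that group's base rate so its generalized false-negative rate moves toward the disadvantaged group's, without breaking calibration. The mixing fraction below is a simplified estimator and all inputs are hypothetical.

```python
import numpy as np

def calibrated_eq_odds_mix(prob, y, group, disadvantaged):
    """Mix the advantaged group's scores with its base rate so that the two
    groups' generalized false-negative rates (expected score shortfall on
    true positives) are brought closer together."""
    prob, y, group = map(np.asarray, (prob, y, group))
    adv = group != disadvantaged

    def gfnr(p, t):
        return np.mean(1 - p[t == 1])

    base_rate = y[adv].mean()
    fnr_adv, fnr_dis = gfnr(prob[adv], y[adv]), gfnr(prob[~adv], y[~adv])
    # Fraction of advantaged predictions to replace by the trivial predictor.
    alpha = np.clip((fnr_dis - fnr_adv) / max((1 - base_rate) - fnr_adv, 1e-9), 0, 1)
    mixed = prob.astype(float).copy()
    rng = np.random.default_rng(0)
    mixed[adv & (rng.random(len(prob)) < alpha)] = base_rate
    return mixed

# Hypothetical calibrated pass probabilities, outcomes, and group flags.
p = np.array([0.9, 0.8, 0.6, 0.4, 0.5, 0.3])
y = np.array([1,   1,   0,   1,   1,   0  ])
g = np.array([0,   0,   0,   1,   1,   1  ])
print(calibrated_eq_odds_mix(p, y, g, disadvantaged=1))
```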
9. Equalized Odds Post-processing
This technique modifies predicted labels to ensure fairness by meeting the equalized odds criterion across protected demographic groups.
Examples:
Interview Score Adjustments
After interviews, AI-generated scores are adjusted to ensure fairness for candidates across age groups, particularly when evaluating traits like confidence or assertiveness.
Technical Problem-solving Scenarios
WeCP ensures predictions for problem-solving ability are not skewed by gender, ensuring equal opportunity for men and women to score highly.
Reducing Educational Bias
The post-processing step ensures predictions for candidates from smaller or less-renowned universities are not unduly penalized when compared to those from elite institutions.
Ensuring Gender-neutral Face Verification
In cases where face verification is required (e.g., to prevent impersonation during assessments), WeCP’s post-processing algorithms ensure:
Equal success rates for identity verification across genders and ethnicities.
Adjustment of false positive rates if the system initially shows better performance for one demographic (e.g., males) over another (e.g., females).
For example, if male candidates were verified more accurately due to an imbalance in training data, post-processing adjusts the verification outcomes to achieve parity.
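A minimal sketch of one way to approximate this on synthetic verification scores is shown below (hypothetical data and thresholds, not WeCP's verification system): a separate decision threshold is chosen per group so every group's true positive rate lands as close as possible to a common target. The full method also balances false positive rates, typically by randomizing between two thresholds per group.

```python
import numpy as np

def group_thresholds_for_equal_tpr(prob, y, group, target_tpr=0.80):
    """Pick a per-group threshold whose true positive rate is closest to target_tpr."""
    prob, y, group = map(np.asarray, (prob, y, group))
    thresholds = {}
    for g in np.unique(group):
        p_g, y_g = prob[group == g], y[group == g]
        candidates = np.unique(p_g)
        tprs = np.array([y_g[p_g >= t].sum() / max(y_g.sum(), 1) for t in candidates])
        thresholds[g] = candidates[np.argmin(np.abs(tprs - target_tpr))]
    return thresholds

# Hypothetical face-verification scores, ground truth, and demographic groups.
prob  = np.array([0.9, 0.7, 0.6, 0.4, 0.8, 0.5, 0.45, 0.3])
truth = np.array([1,   1,   0,   0,   1,   1,   0,    0  ])
grp   = np.array([0,   0,   0,   0,   1,   1,   1,    1  ])
print(group_thresholds_for_equal_tpr(prob, truth, grp))
```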
10. Meta Fair Classifier
Meta fair classifiers take fairness metrics as part of their input and optimize the classifier to uphold these metrics.
Examples:
Custom Fairness Metrics for Clients
For companies with specific diversity goals, WeCP creates custom fairness metrics to ensure the assessments align with their objectives, such as increasing representation of women in STEM roles.
Cross-industry Skill Benchmarking
Meta fair classifiers are optimized for industries where gender gaps are prevalent, such as ensuring equal scoring opportunities in engineering and finance roles.
Inclusive Multi-role Assessments
For multi-role hiring campaigns, WeCP ensures fairness across all roles by tailoring fairness metrics to suit the requirements of each position while maintaining high accuracy.
AI-Based Communication Analysis
For candidates recording video responses, WeCP’s meta fair classifiers:
Analyze verbal and non-verbal communication, focusing only on relevant factors like clarity, confidence, and logical reasoning.
Ignore biases linked to gender-based speaking patterns (e.g., softer tones for female candidates or regional accents for male candidates).
By optimizing for fairness metrics, the classifier ensures that candidates are scored equitably, regardless of gender or accent.
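To illustrate what "taking a fairness metric as input" can look like, here is a small, self-contained sketch on synthetic data (not WeCP's classifier): a client-chosen metric, the selection-rate ratio between groups, is passed in as a function, and model selection keeps only decision thresholds that satisfy it before picking the most accurate one.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def selection_rate_ratio(pred, group):
    """Client-chosen fairness metric: ratio of selection rates between groups."""
    r0, r1 = pred[group == 0].mean(), pred[group == 1].mean()
    return min(r0, r1) / max(max(r0, r1), 1e-9)

def fit_with_fairness_metric(X, y, group, metric, min_metric=0.8):
    """Tiny stand-in for a meta fair classifier: the fairness metric is an
    input, and only thresholds that satisfy it are considered."""
    model = LogisticRegression(max_iter=1000).fit(X, y)
    prob = model.predict_proba(X)[:, 1]
    best_t, best_acc = None, -1.0
    for t in np.linspace(0.1, 0.9, 81):
        pred = (prob >= t).astype(int)
        if metric(pred, group) >= min_metric and (pred == y).mean() > best_acc:
            best_t, best_acc = t, (pred == y).mean()
    return model, best_t

rng = np.random.default_rng(0)
group = rng.integers(0, 2, 400)
X = np.c_[rng.normal(size=400), group + 0.3 * rng.normal(size=400)]
y = (X[:, 0] + 0.5 * group > 0.3).astype(int)
model, threshold = fit_with_fairness_metric(X, y, group, selection_rate_ratio)
print("selected threshold:", threshold)
```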
Bias mitigation is not a one-time effort; it requires continuous monitoring and updates. At WeCP, we follow a structured process for improving our AI-driven assessments:
1. Data Audits: Regular audits are conducted on training and test data to detect any emerging biases (a minimal audit check is sketched after this list). For example:
Identifying changes in demographic trends in the candidate pool.
Checking for bias in new datasets introduced for feature updates.
2. User Feedback Loops: We actively collect feedback from customers and candidates to identify areas of improvement. For instance:
Candidates may report questions that feel culturally or contextually irrelevant.
Employers may identify trends suggesting biases in outcomes for certain roles.
3. Third-party Reviews: To maintain transparency, WeCP works with external consultants and researchers to review and validate its bias mitigation strategies.
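As an example of what a basic audit check can look like (the metric choice and column names here are illustrative, not WeCP's internal tooling), the sketch below computes the statistical parity difference, the gap in favourable-outcome rates between groups, for a batch of assessment results; values far from zero are a signal to investigate further.

```python
import pandas as pd

def statistical_parity_difference(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Gap in favourable-outcome rates between the groups in group_col."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical batch of assessment outcomes.
results = pd.DataFrame({
    "group":  ["A", "A", "A", "B", "B", "B", "B"],
    "passed": [1,   1,   0,   1,   0,   0,   0  ],
})
print(statistical_parity_difference(results, "group", "passed"))  # ~0.42: worth a closer look
```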
Our aim is to ensure:
Equitable Hiring Decisions remain at the forefront: Candidates are evaluated purely on their skills and potential, not on irrelevant or biased factors.
Talent Pools are diverse: Reducing bias widens the talent pool and fosters diversity, which is widely linked to better organizational performance.
There is Credibility and Trust in the Organization: Fair assessments build trust among candidates and employers, reinforcing WeCP’s reputation as a reliable partner.
For WeCP, mitigating bias is not just a technical challenge but a moral and business imperative. We leverage cutting-edge techniques to stay at the forefront of bias mitigation.