How to effectively track and optimize your test to improve quality of hire

A guide to optimizing your online testing process: track critical metrics to ensure a high-quality selection process.

Written by The WeCP Team

Online tests are a crucial step in the hiring process, serving as an initial filter to qualify candidates for face-to-face interviews. To ensure these tests are effective, it's essential to track specific metrics and adhere to best practices. This article provides a detailed guide to the key metrics to monitor and the best practices to follow, along with benchmark values for interpreting your results.

Key Metrics to Track

  1. Completion Rate

    • Definition: The percentage of candidates who complete the test out of those who started it.

    • Formula: (Number of Candidates Who Completed the Test / Number of Candidates Who Started the Test) * 100

    • Importance: Indicates engagement and helps identify potential issues with the test's length or difficulty.

  2. Pass Rate

    • Definition: The percentage of candidates who pass the test out of those who completed it.

    • Formula: (Number of Candidates Who Passed the Test / Number of Candidates Who Completed the Test) * 100

    • Importance: Measures the overall difficulty of the test and helps in setting the right qualifying benchmark.

  3. Average Time to Complete

    • Definition: The average time taken by candidates to complete the test.

    • Formula: Total Time Taken by All Candidates / Number of Candidates Who Completed the Test

    • Importance: Ensures the test is not too long and maintains candidates’ interest and focus.

  4. Score Distribution

    • Definition: The range and distribution of scores achieved by candidates.

    • Importance: Helps in understanding the overall performance and effectiveness of the test questions in differentiating candidates.

  5. Item Analysis

    • Definition: Performance analysis of individual test questions.

    • Importance: Identifies questions that are too easy, too difficult, or potentially ambiguous, ensuring test quality and fairness.

  6. Drop-off Rate

    • Definition: The percentage of candidates who start but do not complete the test.

    • Formula: (Number of Candidates Who Did Not Complete the Test / Number of Candidates Who Started the Test) * 100

    • Importance: Highlights potential issues in the test structure or user experience that need to be addressed.

  7. Candidate Feedback

    • Definition: Feedback from candidates regarding the test experience.

    • Importance: Provides qualitative insights to improve test content, format, and user experience.

  8. Technical Issues Reported

    • Definition: The number and type of technical problems encountered by candidates.

    • Importance: Ensures a smooth testing process and identifies areas for technical improvements.

  9. Engagement Metrics

    • Definition: Metrics such as the number of clicks, time spent per question, and navigation patterns.

    • Importance: Offers insights into candidate behavior and engagement with the test.

  10. Correlation with Interview Performance

    • Definition: The relationship between online test scores and face-to-face interview performance.

    • Importance: Validates the predictive accuracy of the online test in identifying suitable candidates.
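The formula-based metrics above (completion rate, pass rate, drop-off rate, average time) can be computed directly from your candidate funnel counts. A minimal Python sketch, using hypothetical counts for illustration (the numbers are not WeCP data):

```python
def completion_rate(completed: int, started: int) -> float:
    """Percentage of starters who finished the test."""
    return completed / started * 100

def pass_rate(passed: int, completed: int) -> float:
    """Percentage of finishers who passed."""
    return passed / completed * 100

def drop_off_rate(started: int, completed: int) -> float:
    """Percentage of starters who abandoned the test."""
    return (started - completed) / started * 100

def average_time(total_minutes: float, completed: int) -> float:
    """Mean completion time across finishers."""
    return total_minutes / completed

# Hypothetical funnel: 200 starters, 170 finishers, 102 passes, 5100 total minutes.
print(round(completion_rate(170, 200), 1))  # 85.0
print(round(pass_rate(102, 170), 1))        # 60.0
print(round(drop_off_rate(200, 170), 1))    # 15.0
print(round(average_time(5100, 170), 1))    # 30.0
```

Note that completion rate and drop-off rate always sum to 100%, so tracking both is a consistency check on your data pipeline.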


Test Metrics: Good vs. Bad Metrics

| Metric | Good Metric | Bad Metric |
| --- | --- | --- |
| Completion Rate | > 80% | < 60% |
| Pass Rate | 50% - 70% (dependent on difficulty) | < 30% or > 90% |
| Average Time to Complete | Within 10% of expected time | > 20% longer or shorter than expected |
| Highest Score | Close to 100% | < 90% |
| Lowest Score | Above 30% | < 10% |
| Mean Score | 60% - 70% | < 40% or > 80% |
| Median Score | 60% - 70% | < 40% or > 80% |
| Drop-off Rate | < 20% | > 40% |
| Clicks Per Question | Consistent with expected interaction | Significantly higher or lower |
| Time Spent Per Question | Consistent with question difficulty | Significantly higher or lower |
| Technical Issues Reported | < 5% of candidates reporting issues | > 15% of candidates reporting issues |
| Correlation with Interview Performance | r > 0.6 | r < 0.3 |
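One way to put these benchmarks to work is a small classifier that maps each metric value to a good/bad/needs-review band. A sketch in Python, encoding a few of the thresholds from the table above (the `BANDS` encoding and `classify` helper are illustrative, not a WeCP API):

```python
from statistics import mean

# Threshold bands lifted from the table above (illustrative encoding:
# first predicate tests the "good" range, second tests the "bad" range).
BANDS = {
    "completion_rate": (lambda v: v > 80, lambda v: v < 60),
    "pass_rate":       (lambda v: 50 <= v <= 70, lambda v: v < 30 or v > 90),
    "mean_score":      (lambda v: 60 <= v <= 70, lambda v: v < 40 or v > 80),
    "drop_off_rate":   (lambda v: v < 20, lambda v: v > 40),
}

def classify(metric: str, value: float) -> str:
    """Return 'good', 'bad', or 'needs review' for a metric value."""
    is_good, is_bad = BANDS[metric]
    if is_good(value):
        return "good"
    if is_bad(value):
        return "bad"
    return "needs review"

# Hypothetical score distribution for one test.
scores = [34, 48, 55, 61, 66, 70, 72, 81, 88, 95]
print(classify("mean_score", mean(scores)))  # mean is 67 -> "good"
print(classify("drop_off_rate", 30))         # between bands -> "needs review"
```

Values that fall between the good and bad bands are worth a closer look rather than an automatic verdict; that middle band is where item-level analysis usually pays off.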

Detailed Breakdown

  1. Completion Rate

    • Good: > 80%

      • Indicates high engagement and manageable test length/difficulty.

    • Bad: < 60%

      • Suggests issues with test engagement, length, or difficulty.

  2. Pass Rate

    • Good: 50% - 70%

      • Balanced difficulty, suitable for filtering candidates.

    • Bad: < 30% or > 90%

      • Too difficult or too easy, respectively.

  3. Average Time to Complete

    • Good: Within 10% of expected time

      • Shows that the test length is appropriate.

    • Bad: > 20% longer or shorter than expected

      • Indicates issues with test pacing or question clarity.

  4. Highest Score

    • Good: Close to 100%

      • Top performers are fully demonstrating their abilities.

    • Bad: < 90%

      • Indicates the test might be too hard or there are ambiguous questions.

  5. Lowest Score

    • Good: Above 30%

      • Even lower performers are able to answer some questions correctly.

    • Bad: < 10%

      • Test may be too difficult or poorly designed.

  6. Mean Score

    • Good: 60% - 70%

      • Indicates a balanced test with a good spread of scores.

    • Bad: < 40% or > 80%

      • Suggests the test is either too hard or too easy.

  7. Median Score

    • Good: 60% - 70%

      • Consistent with a balanced difficulty level.

    • Bad: < 40% or > 80%

      • Indicates skewed difficulty.

  8. Drop-off Rate

    • Good: < 20%

      • Most candidates are completing the test.

    • Bad: > 40%

      • High drop-off suggests issues with test design or engagement.

  9. Clicks Per Question

    • Good: Consistent with expected interaction

      • Indicates questions are clear and straightforward.

    • Bad: Significantly higher or lower

      • May indicate confusing questions or overly simple ones.

  10. Time Spent Per Question

    • Good: Consistent with question difficulty

      • Shows candidates are spending appropriate time per question.

    • Bad: Significantly higher or lower

      • Indicates potential issues with question clarity or difficulty.

  11. Technical Issues Reported

    • Good: < 5% of candidates reporting issues

      • Indicates a stable testing platform.

    • Bad: > 15% of candidates reporting issues

      • Suggests significant technical problems.

  12. Correlation with Interview Performance

    • Good: r > 0.6

      • Strong correlation, indicating the test is a good predictor of interview performance.

    • Bad: r < 0.3

      • Weak correlation, suggesting the test is not effectively predicting interview success.
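The correlation metric above is typically Pearson's r between online test scores and interview ratings. A minimal sketch, implemented from scratch so it needs no third-party libraries (the paired sample data is hypothetical):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired data: online test scores vs. interview ratings (1-5 scale).
test_scores      = [55, 62, 70, 78, 85, 91]
interview_rating = [3.0, 3.4, 3.5, 4.1, 4.3, 4.7]

r = pearson_r(test_scores, interview_rating)
print(r > 0.6)  # True: a strong predictor by the benchmark above
```

In practice you need a reasonable sample size before r is meaningful; with only a handful of candidate pairs the estimate is noisy, so treat early readings as directional.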

By monitoring these metrics and aiming for the "good" benchmarks, you can ensure that your online tests are effective tools for qualifying candidates for face-to-face interviews.
