WeCP uses Adversarial Debiasing to mitigate this bias: replacing a leaked MCQ question while preventing candidates who knew the leaked answer from easily solving the new one. The key constraints are maintaining the original difficulty and level while avoiding obvious similarity. The bias here isn't demographic unfairness; it's information bias. One group has an unfair advantage (knowledge of the leaked question) that would transfer to the new question if it were too similar, and our goal is to mitigate that information asymmetry. We use Adversarial Debiasing because the technique aims to make representations indistinguishable with respect to a sensitive attribute. The "sensitive attribute" isn't demographic here, but knowledge of the leaked question plays the same role as a differentiating factor.
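To make "indistinguishable with respect to an attribute" concrete, here is a minimal sketch of one common way to implement adversarial debiasing, a gradient-reversal layer. The encoder, head names, and dimensions are illustrative assumptions, not WeCP's production pipeline: a task head predicts difficulty from a question embedding, while an adversary head tries to detect whether the question is a leaked-item replacement; reversing the adversary's gradient pushes the encoder toward representations that hide exactly that signal.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity on the forward pass; flips the gradient sign on backward,
    # so the encoder is pushed AWAY from features the adversary can exploit.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DebiasedQuestionEncoder(nn.Module):
    # Hypothetical encoder with two heads:
    #  - difficulty_head: predicts difficulty/level (the property we preserve)
    #  - adversary_head: predicts the "sensitive" attribute, here whether the
    #    question is a replacement for a leaked item (the property we hide)
    def __init__(self, emb_dim=768, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(emb_dim, hidden), nn.ReLU())
        self.difficulty_head = nn.Linear(hidden, 3)   # e.g. easy / medium / hard
        self.adversary_head = nn.Linear(hidden, 2)    # replacement vs. original

    def forward(self, x, lambd=1.0):
        z = self.encoder(x)
        difficulty_logits = self.difficulty_head(z)
        # The adversary learns normally, but the encoder receives the reversed
        # gradient and so learns to erase the replacement signal from z.
        adversary_logits = self.adversary_head(GradReverse.apply(z, lambd))
        return difficulty_logits, adversary_logits
```

Training then simply sums a difficulty loss and an adversary loss; because of the reversed gradient, the encoder ends up minimizing the first while maximizing the second.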
We then focus on making the replacement "not obvious", which is crucial. The new question can't be a simple paraphrase or a variation on the same core concept if we want to prevent those with leaked knowledge from having an edge. This is where adversarial training comes into play: we train a model to generate new questions that are indistinguishable from the original question pool in terms of difficulty and level, yet conceptually different enough that knowing the leaked question doesn't make them easy to solve. The "adversary" in this setup is a model that tries to predict whether a new question was generated to replace a leaked one.
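A cheap first line of defense against "simple paraphrase" replacements is an embedding-similarity screen. The sketch below is an assumption-laden illustration (the sentence-transformers model name and the threshold are placeholders, not tuned values); it only catches surface similarity, while the adversarial discriminator described next targets deeper conceptual overlap.

```python
from sentence_transformers import SentenceTransformer, util

# Model choice and threshold are illustrative assumptions, not tuned values.
model = SentenceTransformer("all-MiniLM-L6-v2")
SIMILARITY_CEILING = 0.75  # above this, the candidate is "obviously" related

def is_conceptually_distinct(leaked_question: str, candidate: str) -> bool:
    """Reject near-paraphrases of the leaked question by cosine similarity."""
    emb = model.encode([leaked_question, candidate], convert_to_tensor=True)
    similarity = util.cos_sim(emb[0], emb[1]).item()
    return similarity < SIMILARITY_CEILING
```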
In practice, one AI model generates candidate replacement questions, while a separate AI model tries to identify whether a candidate is a "replacement" (or shares characteristics with the leaked question). The generator is trained to fool the discriminator, producing questions that are high quality (meeting the difficulty and level criteria) yet conceptually distinct.
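Because text generation isn't directly differentiable, one practical reading of this generator/discriminator loop is score-and-select: the generator proposes many candidates, the discriminator scores how detectable each one is as a replacement, and only a difficulty-matched candidate that best evades the discriminator is kept. In this sketch, `generate_candidates`, `embed`, and `predict_difficulty` are hypothetical helpers (for example an LLM prompt, a sentence encoder, and a difficulty classifier), and the selection loop stands in for full adversarial training.

```python
import torch

def pick_replacement(leaked_question, discriminator, embed, generate_candidates,
                     predict_difficulty, target_difficulty, n_candidates=20):
    """Select a replacement that matches difficulty but evades the discriminator.

    `discriminator` is assumed to return a logit for
    P(candidate is a leaked-item replacement) given an embedding.
    """
    best, best_score = None, float("inf")
    for candidate in generate_candidates(leaked_question, n=n_candidates):
        if predict_difficulty(candidate) != target_difficulty:
            continue  # constraint: keep the original difficulty/level
        with torch.no_grad():
            detect_prob = torch.sigmoid(discriminator(embed(candidate))).item()
        # Lower detection probability means the candidate "fools" the adversary.
        if detect_prob < best_score:
            best, best_score = candidate, detect_prob
    return best
```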