
Measuring AI Fluency

Learn in depth how WeCP helps you test the AI fluency of your current and future employees

Written by Abhishek Kaushik
Updated over 2 weeks ago

Artificial Intelligence has moved from being a niche capability to a universal skill that touches nearly every role in today’s workforce. From marketers writing campaign copy with generative AI, to engineers debugging with AI assistants, to product teams brainstorming with AI whiteboards—the ability to work fluently with AI is no longer optional. It is becoming as fundamental as computer literacy was two decades ago.

Yet, while most organizations acknowledge the importance of AI, very few have a reliable way to measure AI fluency among candidates or employees. Resumes and interviews rarely capture it, and asking someone “Do you know how to use AI?” is too superficial. To harness AI effectively, organizations need a way to evaluate how well individuals can communicate with, guide, and collaborate with AI systems.

That’s where WeCP’s new feature steps in.

Introducing WeCP’s “AI Fluency” Feature

WeCP, long known for skill assessments that go beyond traditional multiple-choice tests, has launched AI Fluency - a first-of-its-kind feature that enables companies to measure a candidate’s general proficiency in working with AI.

The goal isn’t to test whether someone knows the latest AI jargon. Instead, it evaluates how well individuals can translate their intent into actionable instructions that AI systems can understand. This skill - prompting, structuring, clarifying, and iterating - is the foundation of true AI fluency.

How It Works: Prompting for Abstract Shapes

The assessment uses an elegant but challenging exercise: prompting AI to create abstract images, such as geometric black-and-white shapes.

Here’s why this works:

  • Articulation Depth: Candidates must describe what they see and want in a way that an AI image generator (like DALL·E or Midjourney) can recreate. Vague prompts don’t cut it.

  • Clarity of Thought: Because the shapes are abstract, there is no cultural or subject-matter bias. Success depends purely on how clearly the candidate structures their instructions.

  • Creativity and Precision: Candidates must balance imaginative thinking with precision - two qualities critical when guiding AI systems in real-world tasks.

For example, given a target abstract image of nested triangles, a strong prompt might be:

“A black-and-white abstract image of a large equilateral triangle with thick black borders, containing two smaller nested triangles of the same style, aligned concentrically, on a white background.”

This isn’t just about describing geometry. It’s about demonstrating the ability to think in AI-compatible language.
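To make this concrete, here is a minimal sketch of how a prompt like the one above could be sent to an image generator so the output can be compared against the target shape. This is purely illustrative and not part of WeCP's assessment pipeline; it assumes the OpenAI Python SDK, a DALL·E model, and an API key in the environment.

```python
# Illustrative only: run the example prompt through an image generator and save the result.
# Assumes the OpenAI Python SDK (`pip install openai`) and OPENAI_API_KEY set in the environment;
# WeCP's own scoring of the generated image is not shown here.
import base64
from openai import OpenAI

client = OpenAI()

prompt = (
    "A black-and-white abstract image of a large equilateral triangle with thick black "
    "borders, containing two smaller nested triangles of the same style, aligned "
    "concentrically, on a white background."
)

# Request a single square image from a DALL·E model.
result = client.images.generate(
    model="dall-e-3",
    prompt=prompt,
    size="1024x1024",
    n=1,
    response_format="b64_json",
)

# Decode and save the generated image so it can be compared against the target shape.
image_bytes = base64.b64decode(result.data[0].b64_json)
with open("generated_triangles.png", "wb") as f:
    f.write(image_bytes)
```

A precise prompt like this one tends to reproduce the intended composition closely, while a vague prompt ("some triangles, black and white") leaves the generator to guess, which is exactly the gap the assessment is designed to surface.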

Why This Matters for Companies

By introducing AI Fluency assessments, organizations can:

  • Hire Future-Ready Talent: Ensure candidates aren’t just experts in their domain but can also harness AI as a multiplier.

  • Benchmark Across Teams: Compare AI fluency across departments to identify training needs and strengths.

  • Reduce Risk of Superficial Skills: Avoid mistaking AI familiarity for real fluency. Knowing how to chat with ChatGPT casually is different from being able to systematically guide it to produce consistent, reliable outputs.

The Bigger Picture

WeCP believes that the future of work will be shaped by those who can effectively collaborate with AI. Measuring AI fluency today gives companies an early edge in building teams that thrive in tomorrow’s environment.

By transforming abstract shape prompting into a measurable skill test, WeCP is not only making AI literacy testable - it’s making it practical, scalable, and relevant for hiring and workforce development.
