Why test AI on our assessments?
At Alva Labs, we take test security seriously. Some candidates and customers wonder whether AI tools like ChatGPT can be used to gain an unfair advantage.
Here’s what we’ve found:
In theory, someone could try to copy test items into an AI tool. To prevent this, we use measures like screenshot blurring and detection. In addition, every candidate agrees to our Fair Play Policy, which prohibits using AI tools, getting outside help, or sharing test content. These measures work together to create a fair and comparable test session for all candidates.
Still, we wanted to know how well AI would actually perform if given the chance.
What did we test?
We ran the Logical Ability Test through leading AI models, letting each model take the test on its own with a range of prompts designed to mimic realistic candidate behavior. We then compared their results to those of the average human test-taker.
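For the technically curious, here is a minimal sketch of what an evaluation loop like this could look like. Everything in it is a hypothetical stand-in: `ask_model`, the item format, and the averaging helper are illustrative only, and the real test content, prompts, proctoring, and scoring pipeline are not shown.

```python
from statistics import mean

N_COMPLETIONS = 50  # the results below average over 50 test completions per model


def ask_model(prompt: str) -> str:
    """Hypothetical wrapper around an LLM client; returns the chosen option, e.g. 'C'."""
    raise NotImplementedError("plug in a real model client here")


def take_test_once(items: list[dict], candidate_prompt: str) -> int:
    """Administer each item once and return the number answered correctly."""
    correct = 0
    for item in items:
        reply = ask_model(f"{candidate_prompt}\n\n{item['question']}")
        if reply.strip().upper() == item["correct_option"]:
            correct += 1
    return correct


def average_score(items: list[dict], candidate_prompt: str) -> float:
    """Average performance over repeated, independent test completions.
    (The results below are reported on a standardized scale with a
    population mean of 5.5; the mapping from raw scores is omitted here.)"""
    return mean(take_test_once(items, candidate_prompt) for _ in range(N_COMPLETIONS))
```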
Results
Result overview:

| Test-taker | Average score (across 50 test completions) |
| --- | --- |
| Expected mean for general working population | 5.5 |
| OpenAI GPT-5 Chat | 2.66 |
| Claude 4.1 Opus | 2.8 |
Key takeaways:
AI struggles with the reasoning steps needed to solve the test.
AI models are unlikely to achieve high scores on the test.
Their results fall in the average or below-average human range.
What does this mean for candidates and customers?
The risk of AI-assisted cheating is low, thanks to proctoring measures and the Fair Play Policy.
People with genuine logical ability continue to outperform AI tools.
We will keep monitoring advances in AI and re-testing our assessments to maintain fairness and security.