Understanding the coding test auto-score

Let's take a closer look at the auto-score on Alva's coding tests and how to interpret the results.

Written by Ludvig Wettlen

How submissions are scored

When a candidate has submitted their coding test, it can be scored in three different ways:

  1. Automatic tests

  2. Code quality by AI

  3. Code review by AI

Automatic tests

The automatic tests check whether an application actually works the way it is intended to. A simple example: we ask candidates to build a form for submitting a name on a website. A basic test to verify that the form works as expected could check whether the website actually receives the name in the correct format.
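
For illustration only, here is a minimal sketch of what such a check could look like, assuming a Jest-style test runner and a hypothetical submitNameForm function from the candidate's submission (neither is part of any actual Alva challenge):

```ts
// Hypothetical example: verify that submitting the form produces a
// well-formed payload. `submitNameForm` is an assumed function name.
import { submitNameForm } from "./form";

test("the form submits the name in the expected format", () => {
  const payload = submitNameForm("Ada Lovelace");

  // The test passes only if the payload contains the name as expected.
  expect(payload).toEqual({ name: "Ada Lovelace" });
});
```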

How do the automated tests work in Alva?

Each test either passes or fails. Based on the number of tests that pass, we calculate a percentage completion score: 4 out of 5 passing tests gives 80%, and 5 out of 5 gives 100%.
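
In other words, the completion score is simply the share of passing tests, as in this small sketch:

```ts
// Completion score: the share of automatic tests that passed, as a percentage.
function completionScore(passedTests: number, totalTests: number): number {
  return (passedTests / totalTests) * 100;
}

completionScore(4, 5); // 80
completionScore(5, 5); // 100
```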

What is a good score on the automated tests?

When tests fail, it is an indication that the submission did not meet the key requirements outlined in the task instructions. However, keep in mind that what counts as a good score always depends on which test you send to which candidates!

Even a junior-level challenge can be difficult for senior candidates if they are given a strict time limit to work on the test. In such a setting, a good score can be lower than 100%.

AI evaluations

Apart from the automatic tests, candidate submissions also undergo an evaluation by AI. This evaluation looks at specific criteria and rates the submission on a scale from 1 to 3, where 1 indicates that the submission did not meet the criterion's requirements and 3 indicates that it fully satisfies them.

AI evaluations are a great starting point for understanding a candidate’s performance, but they cannot completely replace a human reviewer. Each criterion rated by AI has specific examples and comments listed under View Detailed Results.
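
For illustration only, one way to picture a single rated criterion (the field names here are assumptions, not Alva's actual data model):

```ts
// Illustrative shape only; the field names are assumptions.
interface CriterionRating {
  criterion: string;   // e.g. "Code structure"
  score: 1 | 2 | 3;    // 1 = requirements not met, 3 = fully satisfied
  comments: string[];  // examples and comments shown under View Detailed Results
}
```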

Code quality by AI

A candidate’s submission also receives a Code quality score evaluated by AI. High code quality makes the software easier to maintain and reduces the risk of bugs. Code quality is evaluated against eleven specific criteria, ranging from how the candidate applies naming conventions for variables and functions to code structure, adherence to language-specific style guides, modularity, and so on.
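
As a rough illustration (not Alva's actual rubric), this is the kind of difference a naming-convention criterion picks up:

```ts
// Harder to follow: abbreviated names give no hint of intent.
function calc(a: number[], b: number): number[] {
  return a.filter((x) => x > b);
}

// Easier to follow: descriptive names make the purpose clear at a glance.
function filterScoresAbove(scores: number[], threshold: number): number[] {
  return scores.filter((score) => score > threshold);
}
```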

Code review by AI

A candidate’s submission can also undergo a code review by AI. The code review checks that the submission meets project-specific requirements. These requirements can be customized to suit your needs by editing the scorecard associated with the coding challenge.
