Alva’s assessment suite helps hiring teams evaluate candidates’ suitability for job positions and automates workflows in the hiring process. We leverage AI to provide you (a recruiter or a hiring team member) with relevant recommendations for your hiring processes and to help you more efficiently assess candidates applying for technical roles.
AI in Alva’s Architect
Recommending Test Profiles & Coding Assessments
When you create a job position in the Alva platform, we use keywords from the job title to match the role with the right test profile and to recommend suitable coding assessments for technical roles (a test profile helps you decide which personality traits and level of logical ability are ideal for the role you’re hiring for). Here’s an overview of the process:
1. A job position is created, for example, “Salesman at ABC store”.
2. We use a Large Language Model (LLM) to remove words from the title that are not relevant to the role (for example, removing “at ABC store” from the title above).
3. The cleaned job title is then passed to a title taxonomy system, which categorizes the role into a job family using its classification system (i.e., a database of job titles).
4. The categorized role is mapped to a Test/Personality profile to produce a recommendation.
For technical roles, we follow the first four steps above and then use our job title database to match the role’s category (for example, “front-end” or “back-end”) with a specific set of skills. Technical roles then continue with steps 5 and 6:
5. We ask you for the years of experience required for the technical role.
6. We then use an LLM to select a coding challenge from our library based on the required skills and years of experience for the role (an illustrative sketch of this pipeline follows below).
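To make the flow above concrete, here is a minimal Python sketch of the recommendation pipeline. Every name, mapping, and the stubbed LLM call is an assumption made for this example; it is not Alva’s actual code, prompts, or taxonomy.

```python
# Illustrative sketch only: every name, prompt, and the stubbed LLM call below
# is hypothetical and does not reflect Alva's actual code, prompts, or taxonomy.

# Stand-ins for the taxonomy system and the profile/skills mappings that
# Alva's team has already built by hand.
JOB_FAMILY_BY_TITLE = {"salesman": "sales", "front-end developer": "front-end"}
PROFILE_BY_FAMILY = {"sales": "Sales profile", "front-end": "Engineering profile"}
SKILLS_BY_FAMILY = {"front-end": ["JavaScript", "React", "CSS"]}
CHALLENGE_LIBRARY = ["Junior UI component task", "Senior SPA architecture task"]


def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns canned but plausible outputs."""
    if "Remove any company names" in prompt:
        return "Salesman"
    return "Junior UI component task"


def recommend_profile(raw_title: str) -> str:
    """Steps 1-4: clean the title, classify it into a job family, map to a profile."""
    cleaned = call_llm(
        f"Remove any company names or locations from this job title: '{raw_title}'"
    )
    family = JOB_FAMILY_BY_TITLE.get(cleaned.lower(), "general")
    return PROFILE_BY_FAMILY.get(family, "Default profile")


def recommend_challenge(family: str, years_of_experience: int) -> str:
    """Steps 5-6 (technical roles): pick a coding challenge by skills and seniority."""
    skills = SKILLS_BY_FAMILY.get(family, [])
    prompt = (
        f"Select one challenge from {CHALLENGE_LIBRARY} for a role requiring "
        f"{skills} and about {years_of_experience} years of experience."
    )
    return call_llm(prompt)


print(recommend_profile("Salesman at ABC store"))   # -> Sales profile
print(recommend_challenge("front-end", 2))          # -> Junior UI component task
```

In a real deployment the hard-coded dictionaries would be replaced by the title taxonomy and the profile mappings Alva’s team has already defined, and `call_llm` by an actual model call; the structure of the flow is the point of the sketch.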
While we use AI tools to recommend assessments and test profiles, their role is limited to automating mappings that have already been completed by humans: the team at Alva has previously mapped out which test profiles and assessments are ideal for specific positions, and our use of external AI tools is designed only to reproduce this mapping. No personal data is shared with external AI tools in these steps.
Recommending Interview Questions
The second component of Alva’s Architect helps you generate interview questions based on a candidate’s personality test results.
The feature works using the following process:
The candidate’s fit to the Test/Personality profile is collected. We identify the most important personality trait for the selected profile and the candidate’s lowest-fitting personality trait for the role.
The traits and the corresponding fit to the profile are fed to an LLM, together with instructions on proper interviewing practice and context about the personality model. No personal data is shared in this step, as only the raw test scores are shared with the LLM.
An interview criterion is generated, including:
Questions
Scoring guidelines
You can regenerate the criterion if you are not satisfied with the result (an illustrative sketch of this process follows below).
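As a rough illustration of this flow, the sketch below shows how trait names and raw fit scores might be combined into a single LLM prompt. The prompt wording, trait names, and function names are assumptions made for the example, not Alva’s actual prompts or logic.

```python
# Illustrative sketch only: the prompt wording, trait names, and stubbed LLM
# call are hypothetical and are not Alva's actual prompts or logic.

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned criterion."""
    return (
        "Question: Tell me about a time you had to change plans at short notice.\n"
        "Scoring guideline: 1 = resisted the change ... 5 = adapted quickly."
    )


def generate_interview_criterion(
    key_trait: str,         # most important trait for the selected profile
    key_trait_fit: float,   # candidate's raw fit score for that trait
    lowest_trait: str,      # candidate's lowest-fitting trait for the role
    lowest_trait_fit: float,
) -> str:
    """Builds a prompt from trait names and raw fit scores, then asks the LLM
    for interview questions and scoring guidelines."""
    prompt = (
        "Follow good structured-interviewing practice (behaviour-based, "
        "non-leading questions) and use the personality model context provided.\n"
        f"Most important trait for this profile: {key_trait} (fit {key_trait_fit:.2f}).\n"
        f"Candidate's lowest-fitting trait: {lowest_trait} (fit {lowest_trait_fit:.2f}).\n"
        "Generate interview questions and scoring guidelines for these traits."
    )
    return call_llm(prompt)


print(generate_interview_criterion("Flexibility", 0.42, "Emotional stability", 0.31))
```

Note that only trait names and raw fit scores enter the prompt, which is consistent with no personal data being shared in this step; regenerating the criterion simply re-runs the same call.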
AI in the Auto-Scoring of Coding Tests
Alva’s coding assessments are designed to gauge the readiness of candidates applying for your technical roles. Our auto-scoring feature helps you score candidates’ answers to our coding challenges and is particularly helpful for recruiters coming from non-technical backgrounds.
Here’s how our auto-scoring feature works:
A candidate submits their response to one of our coding challenges (the code they have written).
Code Review: an LLM reviews the code submitted by the candidate based on Alva’s coding review scorecard and the specific coding challenge. These scorecards are customizable, and you can always edit the criteria used to score submissions.
If the code does not meet the listed criteria, it receives a score of 0.
If the code partially meets the listed criteria, it receives a score of 1.
If the code fully meets the listed criteria, it receives a score of 2.
Code Quality: an LLM assesses the quality of the candidate’s code and scores it using criteria provided by Alva. These criteria cannot be edited.
If the code does not meet the listed criteria, it receives a score of 0.
If the code partially meets the listed criteria, it receives a score of 1.
If the code fully meets the listed criteria, it receives a score of 2.
An overall coding score is then calculated by weighting the candidate’s scores from the AI code review, the AI code quality assessment, and the other automated tests, yielding a percentage score (a worked example follows below).
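To illustrate the arithmetic, the sketch below scores each criterion 0, 1, or 2 and combines the resulting sub-scores into a weighted percentage. The number of criteria, the per-criterion scores, and the weights are hypothetical; Alva’s actual weighting is not described here.

```python
# Illustrative arithmetic only: the criteria counts, per-criterion scores, and
# weights below are hypothetical and do not reflect Alva's actual weighting.

def rubric_fraction(scores: list[int]) -> float:
    """Each criterion is scored 0 (not met), 1 (partially met), or 2 (fully met);
    return the fraction of the maximum possible score."""
    return sum(scores) / (2 * len(scores))


code_review = rubric_fraction([2, 1, 2, 0])   # AI code review against the scorecard
code_quality = rubric_fraction([2, 2, 1])     # AI code quality criteria
auto_tests = 8 / 10                           # pass rate of the other automatic tests

# Hypothetical weights that sum to 1.0.
weights = {"review": 0.4, "quality": 0.3, "tests": 0.3}
overall = (
    weights["review"] * code_review
    + weights["quality"] * code_quality
    + weights["tests"] * auto_tests
)
print(f"Overall coding score: {overall:.0%}")  # -> Overall coding score: 74%
```

With these example numbers the candidate ends up at 74% of the maximum possible score.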
Please note that using these AI features is entirely optional, and we do not share personal data with external AI tools in any of the steps described above.