How to pass recorded interviews in which an algorithm analyzes responses, body language and behavioral patterns
Convincing a robot that we are the perfect candidate for a job is a challenge we will have to get used to. The use of artificial intelligence (AI) is spreading to every area of a company that can be digitized, and personnel selection is a natural fit: machine learning excels at processes made up of repetitive, very similar tasks.
Large companies such as L’Oréal, Unilever, Telepizza, Vodafone and McDonald’s began using this technology years ago to fine-tune their hiring, but the proof that it has moved from trend to reality is that companies specializing in AI-driven job interviews reported an increase in demand during the pandemic.
Its main advantage is that it can be applied across the entire selection cycle: contacting candidates; filtering résumés; conducting pre-recorded video interviews with predefined questions in which answers, body language and behavioral patterns are analyzed; and producing final reports. This makes selection processes faster, by automating review tasks, and fairer, since every candidate is compared in the same way, reducing human bias and discrimination by gender, ethnicity, culture or age. It can also reduce the cultural mismatch between the organization and new hires, and detect behavioral patterns that may signal future problems with performance, attrition or teamwork.
How does AI evaluate a job interview?
To begin with, the algorithm works from the predefined questions and identifies key aspects of the answers. In other words, it transforms the audio into text, analyzes it, classifies it, extracts content and assigns a rating. It takes into account what we say, how we say it and how our body responds.
In addition, it scans facial gestures to identify emotions and reactions, and certain questions are designed to assess the candidate’s personality.
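The text-scoring step described above can be illustrated with a deliberately simplified sketch. Real systems transcribe the audio with speech-to-text and score it with trained language models; the keyword lists and the scoring formula below are purely hypothetical, chosen only to show the shape of "classify the transcript and assign a rating".

```python
# Toy sketch of the rating step, assuming the audio has already been
# transcribed to text. The keyword lists and clamping range are invented
# for illustration; a production system would use trained models instead.

POSITIVE_KEYWORDS = {"team", "led", "improved", "achieved", "collaborated"}
NEGATIVE_KEYWORDS = {"never", "can't", "problem", "failed"}

def score_answer(transcript: str) -> float:
    """Assign a 0-1 rating to a transcribed answer from keyword matches."""
    # Normalize: lowercase and strip basic punctuation from each word.
    words = {w.strip(".,!?").lower() for w in transcript.split()}
    positives = len(words & POSITIVE_KEYWORDS)
    negatives = len(words & NEGATIVE_KEYWORDS)
    raw = positives - negatives
    # Map the raw balance onto [0, 1], clamping at the ends.
    return max(0.0, min(1.0, 0.5 + raw / 6))

print(score_answer("I led a team and we improved delivery times"))  # higher score
print(score_answer("I never failed because I can't fail"))          # lower score
```

This also makes the article’s later advice concrete: a transcription error or a run of negative words directly lowers the number such a system produces.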
Finally, since other competencies must also be measured, the interview can be combined with other mechanisms, such as gamification, tests or exercises, allowing competencies like critical, analytical or computational thinking to be assessed and linked to the interview results.
How to pass an interview with a robot
Applying for a job at a company that relies on artificial intelligence to screen candidates means accepting that this is not the traditional job interview, in which a person evaluated us face to face through a more or less open conversation. If we want to get through a video interview with an AI, a few recommendations can help.
– Environment: it is a good idea to have a good camera and good lighting. Framing also matters, so that we are correctly positioned in front of the camera.
– Preparation: some platforms include a practice demo. It is worth using it and spending as much time as needed on this step before starting the real interview.
– Content and form: it is advisable to watch both our diction and the content of our answers. These systems are generic and make mistakes when words are not pronounced clearly. In addition, the use of negative words hurts the evaluation.
– Emotional control: although some tension is normal in these processes, ideally we should keep our emotions in check so that the system does not register negative signals.
However, even if we follow these tips to the letter, we may not get the job despite being the ideal candidate, because these systems are not infallible. One problem is that we may not even make it to the interview: if the job description imposes restrictive filters on the selection process, candidates who do not fit the norm can be screened out. Another is that, having been trained on data, these systems may reproduce the biases of the people who trained them in the first place.