Artificial intelligence is becoming commonplace in everyday life. With the emergence of ChatGPT and Microsoft's Sydney, people are becoming more aware of how powerful and deeply integrated into society these tools can be. But what if these tools are heavily biased? Wouldn't that affect a large number of people in serious ways? We are already beginning to see inequity in the recruitment process. AI recruitment uses software to screen candidates and resumés without relying heavily on company recruiters, and it is meant to be a more efficient method. Currently, however, the recruitment process lacks parity. In this article, I explore the advantages and disadvantages of using AI in recruitment.
AI recruitment became popular in 2018, when companies became overwhelmed with job applications. According to a Deloitte study, the average number of applications received per position increased by 39 percent from 2012 to 2022 (Deloitte, 2019). Traditional human-resources resumé filtering is inefficient, and AI allows companies to review entire resumés instead of only certain portions of them. AI can help companies retain stronger, more qualified workforces because it uses models to perform the required analysis. By applying tools such as sentiment analysis and natural language processing, it can approximate an understanding of an applicant's personality and perform much of the job a recruiter traditionally does. From there, the recruiter can decide whether or not to hire.
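To make the screening idea above concrete, here is a minimal illustrative sketch of how an automated screener might rank resumés. It uses simple keyword matching against a list of required skills; real systems rely on trained NLP models rather than word counts, so all function names and the scoring logic here are assumptions for illustration only.

```python
import re
from typing import Dict, List, Tuple

def score_resume(text: str, required_skills: List[str]) -> int:
    """Count how many required skills appear in the resumé text.

    A stand-in for the much richer analysis (sentiment, language
    models) that commercial screening tools perform.
    """
    words = set(re.findall(r"[a-z+#]+", text.lower()))
    return sum(1 for skill in required_skills if skill.lower() in words)

def top_candidates(resumes: Dict[str, str],
                   required_skills: List[str],
                   n: int = 5) -> List[Tuple[str, int]]:
    """Score every resumé and return the n highest-scoring applicants."""
    scored = [(name, score_resume(text, required_skills))
              for name, text in resumes.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:n]
```

The key point of the article applies even to this toy: whatever signal the scorer keys on (here, hand-picked keywords; in practice, patterns learned from past hires) determines who rises to the top, so biased inputs produce biased rankings.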
Although AI can handle large amounts of data, a major problem is its bias against people of color and other underrepresented groups. In the past, facial recognition software used by police departments was inaccurate, putting Black Americans at risk of false accusations of crimes (Khaleda, 2023). The software attempted to automate matching faces captured in crime footage to photos in a database, cutting out the need for police to scour the database by hand. A 2021 economic study revealed that applications featuring names typically associated with the Black community, such as Antwan, Darnell, Kenya, and Tamika, were less likely to receive a response than applications with names typically associated with white Americans, such as Brad, Joshua, Erin, and Rebecca (Peyush, 2022). This inequality arises because past hires consisted mainly of people with “white” names, which caused the AI models to be biased against the names of Black Americans. This is a significant issue because it bakes previous systemic hiring biases into the new models, making it harder to remove racism from future recruitment processes.
AI has the capability to make recruiting more effective; however, it still needs to be fine-tuned. For example, Amazon created an AI tool to screen 100 resumés and output the top five. The models were trained on past hires, but because those hires were mainly men, the model became biased against women (Dastin, 2018). Strong empirical data on how well or poorly these models perform is difficult to find. Many experts have suggested that they perform poorly because they pick up on gendered language in past resumés. Some people, however, believe otherwise: a Deloitte survey found that more than half of participants trust AI in the application process (Deloitte, 2020). Those concerned about bias have argued that AI can overcome it if the models are trained on accurate data and built by diverse teams. Professionals have also called for using these models in an “anonymous mode” (Drage & Mackereth, 2022), in which all demographic and identifiable information is removed from an applicant’s resumé and applicants are judged strictly on merit. Essentially, this kind of model “looks at what the candidates are saying instead of how they are saying it” (Drage & Mackereth, 2022).
As AI continues to be implemented in the workplace, it is crucial that recruiters monitor its bias to ensure more diverse workplaces while staying up to date on technological advancements.
To conclude, AI in recruitment is not going away, but it should be monitored. “Predictions say that AI for the recruitment industry is expected to grow to USD 890.51 million” (Schwartz et al., 2019). Companies should use these models to screen out candidates they clearly would not hire, but they should keep a human in the recruiting process and provide feedback to the model when they think it is wrong so that it can adjust. There is often a disconnect between what DEI employees want and what the computer scientists build into the models.