Higher Standards for Hire: Algorithmic Bias in the Job Application Process
This article examines algorithmic bias in the hiring process and its implications for diversity. I present the potential benefits of using AI in hiring, evidence of algorithmic bias, and current evidence of human hiring bias.
People with great ideas can be found in every corner of the world. This growing trend of diversity can be seen everywhere: in the workplace, in schools, and in everyday life. Introducing “diversity” in these places has many benefits, including a wider pool of candidates from which to find the perfect “fit,” different ways of thinking about the same problem, and a stronger community. A person from a diverse background, whether racial, gender, or socioeconomic, can have a lot to offer. In many cases, a non-traditional background has required them to be resourceful, creative, and persevering in order to get where they are now. These are exactly the qualities an employer looks for in its employees.
With the advancement of technology, many employers are turning to AI to pick the best candidates for them. For example, for every 1,000 applications received, an AI program may analyze the pool and pick the “top” 5 so that the recruiter can choose manually from the smaller set; the whole process may take only a few days. By contrast, the same task may take weeks using human reviewers alone. Employers hope to use AI to shorten the “hiring window,” the time during which the employer is searching to fill a vacant position; this hiring time, they claim, is a “drain on organizational productivity.” Additionally, AI has the potential to forecast how successful a person might be at the employer’s company better than any human could, through complex algorithms that analyze the person’s qualifications and experience. Easier application submission, along with AI-generated suggestions about where to apply based on a candidate’s resume, promises benefits for both employers and candidates. Some websites heading in this direction are LinkedIn and, for students, Handshake. Even video interviews, where AI scans micro-expressions for honesty and overall fit, are already being implemented.
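As a rough illustration of that screening step, the sketch below scores a synthetic pool of 1,000 applications and keeps the top five for human review. The scoring formula and field names are hypothetical stand-ins; real systems use proprietary trained models.

import heapq

# Hypothetical scoring function; a real screening system would use a
# trained model rather than a hand-written formula.
def score(application):
    return application["years_experience"] + 2 * application["skill_matches"]

# Synthetic pool of 1,000 applications.
applications = [
    {"name": f"applicant_{i}",
     "years_experience": i % 10,
     "skill_matches": (i * 7) % 5}
    for i in range(1000)
]

# Keep only the "top" 5 for a recruiter to review manually.
top_5 = heapq.nlargest(5, applications, key=score)
for app in top_5:
    print(app["name"], score(app))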
One of the greatest misconceptions about AI in decision-making is that it will be “unbiased” when approving someone for a loan, sentencing them to a punishment, or perhaps even making a hiring decision. It would be ideal if the color of someone’s skin, gender, or age did not factor into such important decisions. Despite popular opinion, it is well known within the research community that machines can in fact be biased. Experts in algorithmic bias, such as Buolamwini of MIT, Gebru of Stanford, Cowgill of Columbia, and Kiritchenko of the National Research Council Canada, whose work focuses on fairness, find that this bias stems from the data the algorithm trains on. There is still some ambiguity about the relationship between existing hiring bias and AI algorithms: hiring has long been biased by humans, and the data produced by those practices could be used to train AI.
Programming and training a machine learning algorithm consists of feeding data to the algorithm. Based on this data, the algorithm starts to recognize trends and patterns; essentially, the more often an element is repeated, the more likely the algorithm is to recognize a similar element in the future. Once these patterns are learned, the hope is that the algorithm will replicate or recognize data like the set on which it was trained. In the case of hiring, this means that if a company has hired in a biased manner for the past 50 years and those records are the only data available, then a hiring algorithm will be more likely to pick someone who resembles the company’s previous employees, thus perpetuating racism and discrimination.
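A minimal sketch of that dynamic, using synthetic data and hypothetical group labels: a naive model that simply learns each group’s historical hire rate will score two equally qualified applicants very differently.

from collections import defaultdict

# Hypothetical historical records: (demographic_group, was_hired).
# Group "A" was historically favored; qualifications are identical.
history = ([("A", True)] * 80 + [("A", False)] * 20 +
           [("B", True)] * 20 + [("B", False)] * 80)

# "Training": tally the hire rate the data shows for each group.
counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
for group, hired in history:
    counts[group][0] += hired
    counts[group][1] += 1

def predicted_hire_probability(group):
    hires, total = counts[group]
    return hires / total

# Equally qualified applicants get very different scores, purely
# because of the skew in the historical data.
print(predicted_hire_probability("A"))  # 0.8
print(predicted_hire_probability("B"))  # 0.2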
In their study, Buolamwini and Gebru assess facial analysis algorithms and datasets, attempting to demonstrate bias against a specific group of people: darker-skinned females. They created their own facial dataset that is both gender-balanced (roughly equal numbers of men and women) and skin-tone-balanced (an even distribution of lighter-skinned and darker-skinned subjects) to create a more equal and realistic representation of humans. They then ran this data through three commercial facial analysis systems, from Microsoft, IBM, and Face++. Unsurprisingly, they found that the systems recognized lighter-skinned men with a much higher degree of accuracy than darker-skinned women (see Figure 1 from Buolamwini and Gebru’s study).
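The core of their method is disaggregated evaluation: computing accuracy per subgroup rather than one overall figure. A minimal sketch with synthetic, illustrative numbers (not the study’s actual results):

# Synthetic per-subgroup results: (subgroup, classified_correctly).
records = ([("lighter_male", True)] * 99 + [("lighter_male", False)] * 1 +
           [("darker_female", True)] * 65 + [("darker_female", False)] * 35)

def accuracy(rows):
    return sum(correct for _, correct in rows) / len(rows)

# The aggregate number hides the gap that disaggregation reveals.
print("overall:", accuracy(records))
for group in ("lighter_male", "darker_female"):
    subset = [r for r in records if r[0] == group]
    print(group + ":", accuracy(subset))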
Bias in hiring practices has persisted since at least the 1980s, as seen in Quillian’s meta-analysis, which contrasts hiring-related outcomes between equally qualified candidates from different racial and ethnic groups. The analysis found that the level of racial bias against African Americans has not changed over the past 30 years. A plethora of other categories are subject to bias as well, including gender and age. It is important to know what types of hiring bias still exist and what this could mean for the future of AI in hiring. Incorporating AI into the hiring process will only perpetuate bias and racism because of existing biased datasets, ultimately hindering diversity and, by extension, progress.
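Meta-analyses like Quillian’s aggregate field experiments that send matched resumes and compare callback rates across groups. A minimal sketch of that metric, with hypothetical counts rather than the study’s actual data:

# Synthetic counts in the style of a resume audit study.
applications_sent = 100  # identical resumes sent per group
callbacks = {"white": 24, "black": 15}  # hypothetical callbacks received

rates = {group: n / applications_sent for group, n in callbacks.items()}
ratio = rates["white"] / rates["black"]
print(rates)
print(f"white applicants received {ratio:.1f}x as many callbacks")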
Such biased data could easily be fed to machine learning algorithms; numbers and statistics like these are sure to be found in many companies’ employee records, among many other categories. If one of these companies bought or used an algorithm tailored to such records, the implications would be serious. Many existing equal-employment laws are antiquated and do not account for algorithmic hiring bias, and the ones that do require a long and tedious legal battle. Because of this, hiring bias and discrimination can continue in more innocuous ways.
A computer scientist should be wary of bias in the data by looking at the kinds of people a company has historically hired. For example, according to Wired, 88% of Microsoft management is male and 81% is white. If an algorithm were trained on this data, it could overlook thousands of qualified candidates who do not match that profile; in this instance, bias would take the form of training data dominated by white men. This applies well beyond the tech world, to law, medicine, engineering, and other industries that receive many applications but have a history of low employment for minorities and women.
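One simple precaution is to audit the training data’s composition before using it. A minimal sketch, with synthetic records that mirror the percentages cited above:

from collections import Counter

# Hypothetical employee records mirroring the cited management skew.
records = [{"gender": "male"}] * 88 + [{"gender": "female"}] * 12

counts = Counter(r["gender"] for r in records)
total = sum(counts.values())
for group, n in counts.items():
    print(f"{group}: {n / total:.0%}")  # male: 88%, female: 12%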
The implementation of AI algorithms in the hiring process is not yet ready. Human bias continues to be an issue despite laws that explicitly outlaw such behavior, and researchers agree that the training set is the biggest cause of bias in these systems. Because biased data is often the only data available for training, it is foolish to expect the resulting algorithms to be unbiased. Some researchers suggest mitigating bias by manually applying affirmative-action-style corrections or by replacing historical data with newly created artificial data, as sketched below. Even so, more research is needed on mitigating algorithmic bias. Although bias in algorithms is still pervasive, they remain a step in the right direction; with the perfection of this technology, decision-making could become both fair and efficient, creating a “win-win” scenario for both the employee and employer.
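As a rough sketch of the data-rebalancing idea, assuming synthetic records and a simple upsampling scheme rather than any researcher’s specific method:

from collections import Counter

# Historical records skewed toward group "A" (synthetic data).
data = ([("A", True)] * 80 + [("A", False)] * 20 +
        [("B", True)] * 20 + [("B", False)] * 80)

# Upsample every (group, outcome) cell to the size of the largest cell,
# so each combination contributes equally during training.
cells = Counter(data)
target = max(cells.values())
balanced = []
for cell in cells:
    balanced.extend([cell] * target)

print(Counter(balanced))  # every cell now appears 80 times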