In 2023, 50% of college and university admissions offices reported using AI to review applications; in 2024, reports suggest that number rose to 80%.[1] These tools allow admissions offices to process application materials quickly, screen out non-viable applicants, and select for sought-after characteristics. But despite their obvious utility, increased reliance on AI tools requires enhanced safeguards against AI-originated discrimination.
In Students for Fair Admissions, Inc. v. Harvard (“SFFA”),[2] the Supreme Court struck down affirmative action in college admissions, ending a 45-year practice of factoring race into university admissions. Harvard’s admissions process used race as a determinative factor for a significant percentage of admitted minority students. The Court found that Harvard’s use of race in admissions favored some applicants to the detriment of others and held that this practice violated Title VI of the Civil Rights Act of 1964.
While race-based admissions practices are now off-limits, universities and colleges may still promote diversity by screening applicants for race-neutral diverse experiences. Screening for diverse experiences may disproportionately select one race over another, but race itself may never serve as a proxy for the desired experience. For example, universities and colleges can target applicants for whom English is a second language, a trait that may statistically favor one race more than another, but they must select applicants with this trait in a race-blind manner.
Higher education institutions are using large language models (LLMs) in the admissions process to screen applications, admission essays, and other materials, allowing universities and colleges to process materials efficiently and identify qualified applicants. These models are trained on existing and institution-supplied data, fine-tuned with detailed instructions, and refined through human feedback.
Despite their utility, use of AI-powered tools such as LLMs may result in racial selection bias. If the training materials supplied to these tools are drawn from past admissions practices that actively involved affirmative action or other forms of racial selection bias, higher education institutions may unwittingly be programming unconstitutional biases into the software. Another concern is algorithmic drift: just as humans have the capacity to stereotype and assume that members of a certain racial group all share the same individual experiences, AI tools can develop similar algorithmic assumptions. LLMs are often designed to seek efficiency, and their pattern-recognizing algorithms have the capacity to use race as a proxy for selected experiences, creating a significant risk of a civil rights violation for higher education institutions.
The SFFA decision signals that courts will not take university and college admissions processes at face value; they will closely scrutinize the methods and means by which admission decisions are made. As AI tools become increasingly integrated into admissions processes, safeguards are needed to reduce the risk of civil rights violations. One safeguard is periodic algorithmic auditing, a process by which an auditing firm reviews an “instance” of the AI tool in use (a copy of the AI’s algorithm) and runs a large, randomized sample data set through it. The auditing firm then analyzes the results for racial bias and, if bias is found, can assist in correcting the AI software to avoid future instances of discrimination. Integrating regular, periodic algorithmic auditing into university and college admissions processes can therefore help reduce unintentional racial bias and prevent violations of civil rights law.
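To illustrate what one step of such an audit might look like, the sketch below compares a tool's selection rates across demographic groups in a test sample and flags disparities using the common four-fifths rule of thumb. This is a minimal, hypothetical example: the function names, data format, and 0.8 threshold are illustrative assumptions, not a description of any particular auditing firm's methodology.

```python
# Illustrative sketch of one statistical step an algorithmic audit might take:
# compare selection rates across groups in a sample of the AI tool's output
# and flag disparities under the "four-fifths" rule of thumb.
# Field names and the threshold are assumptions made for illustration.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group_label, admitted_bool) pairs from a test run."""
    totals, admits = defaultdict(int), defaultdict(int)
    for group, admitted in decisions:
        totals[group] += 1
        if admitted:
            admits[group] += 1
    return {g: admits[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the four-fifths heuristic)."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items() if r / top < threshold}

# Example: run a large randomized sample through the tool, record its
# decisions, then check the results for disparities.
sample_output = [("group_a", True), ("group_a", False), ("group_a", True),
                 ("group_b", False), ("group_b", False), ("group_b", True)]
rates = selection_rates(sample_output)
print(rates)                          # per-group selection rates
print(disparate_impact_flags(rates))  # groups below the 0.8 ratio
```

A statistical check like this is only one component; a full audit would also examine the tool's inputs and decision rationale to determine whether race is being used as a proxy for the traits the institution intends to select.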
While AI is a tool of increasing utility and prevalence, it is also a product of its inputs. Bad data can produce bad results, and historically biased admissions policies can produce biased modern AI admissions tools. A human touch in the admissions process will remain necessary, lest the discrimination of the past be carried into the future. As universities cycle out racial considerations in their application assessments, they must be vigilant in implementing enhanced safeguards against unintentionally cycling now-unlawful race-based considerations back in through their AI tools.