© 2025 KVPR | Valley Public Radio - White Ash Broadcasting, Inc. :: 89.3 Fresno / 89.1 Bakersfield

How California community colleges are using AI to battle financial aid fraud

Foothill College is one of a number of campuses across California that are using artificial intelligence to detect financial aid scammers.
Barbara Kinney via EdSource

For years, California's community colleges have been fighting a losing battle against fraudsters who have stolen millions of dollars in federal and state financial aid. But state officials now believe they are finally turning a corner, thanks to new tools.

The game-changer? Artificial intelligence.

About 80 of the state’s 115 community colleges are now or will soon be using an AI model that detects fake students by looking for information such as shared phone numbers, suspicious course-taking patterns, and even an applicant’s age. Officials say the AI catches twice as many scammers as the human staff, with some campuses estimating that they are now detecting more than 90% of fraudsters, who are a mix of bots and human criminals, sometimes even located in other countries.

California’s community colleges have lost at least $18 million in aid since 2021, though the actual total could be much higher since colleges were not as skilled at identifying fraud early in the pandemic as they are now. Most of the stolen funds are federal aid, which the local colleges are responsible for distributing. But scammers have also grabbed millions in state aid, including Cal Grants and institutional awards.

The colleges reported losing more than $11 million to financial aid fraud in 2024 as they were inundated with fake students. That year, at least 31% of applicants statewide were fraudulent. Officials predict the aid lost to fraud this year will be much less at campuses using the AI, and that losses will fall further in 2026, when the AI is fully implemented at more colleges.

“When our colleges are looking record by record, they’re not seeing the connections or patterns. But AI is really good at finding those things,” said Jory Hadsell, a visiting executive in the California Community Colleges Chancellor’s Office. Before joining the chancellor’s office this year to help combat the fraud problem, he helped develop the AI model as vice chancellor of technology for the Foothill-De Anza Community College District.

To make it easier for real students to verify their identity when applying, the community colleges are also collaborating with the California Department of Motor Vehicles and plan to use the department’s new mobile ID system. Students with a California license or ID card can upload it through the DMV’s wallet app. They can then use that to verify their identity when they apply, rather than verifying with ID.me, the college system’s default platform.

“The DMV is like the holy grail of identity in California,” Hadsell said. “These pieces all fit together to create this layered approach, and somewhere along the line, we’re going to stop most of this fraud.”

Community colleges are susceptible to fraud because they are generally open access for enrollment and don’t deny admission to students who meet basic requirements. California’s community college system, in particular, is a big target for fraudsters due to its massive size.

Fraud was worsened by the pandemic, with the shift to mostly remote instruction creating an easy opportunity for online scammers pretending to be real students. It has ramifications not just for colleges but also for their legitimate students, who are sometimes left on waitlists for key classes because scammers initially take up many of the seats.

Even as colleges have learned more about the fraudsters, keeping them out has proven challenging. Online criminals are sophisticated and usually one step ahead of typically understaffed campuses. Officials believe, however, that the AI tool could change that.

In February 2024, the Foothill-De Anza district in Santa Clara County signed a contract for $56,250 with N2N Services, a software company that created an AI platform called Lightleap. The district at the time intended to use it mostly for course predictions, which involves guessing which classes a student will enroll in based on their previous coursework.

But the fraud problem “kind of hit us in the face,” Hadsell said. “So we said, ‘All right, let’s see what we can do with fraud.'”

Hadsell and his staff used data from the district’s two colleges — Foothill and De Anza — to train the AI model to detect fraud. The tool’s features include device fingerprinting, a method of identifying information about the device accessing the application, such as IP addresses and the user’s time zone. It also considers other information, like the student’s age and whether their phone number or email address matches an applicant at another school.
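Lightleap's internals are not public, so the following is only an illustrative sketch of one signal the article describes: flagging applications whose phone number or email also appears on an application at another school. Every field name and the `flag_shared_contacts` helper are invented for illustration.

```python
from collections import defaultdict

def flag_shared_contacts(applications):
    """Group applications by phone and email; any contact value that
    appears on applications to more than one college is suspicious."""
    by_contact = defaultdict(set)  # contact value -> {(college, applicant_id)}
    for app in applications:
        for value in (app["phone"], app["email"]):
            by_contact[value].add((app["college"], app["applicant_id"]))

    flagged = set()
    for value, apps in by_contact.items():
        colleges = {college for college, _ in apps}
        if len(colleges) > 1:  # same phone/email used at multiple schools
            flagged.update(applicant for _, applicant in apps)
    return flagged

apps = [
    {"applicant_id": "a1", "college": "Foothill", "phone": "555-0100", "email": "x@example.com"},
    {"applicant_id": "a2", "college": "De Anza",  "phone": "555-0100", "email": "y@example.com"},
    {"applicant_id": "a3", "college": "Foothill", "phone": "555-0199", "email": "z@example.com"},
]
print(sorted(flag_shared_contacts(apps)))  # ['a1', 'a2']
```

A production system would of course match on far more features (device fingerprints, IP addresses, time zones, age), but the grouping idea is the same.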

When they compared it to their homegrown system of detecting fraud, they found that the AI tool was catching twice as many scammers. In the first round, they dropped about 1,000 suspected fraudsters and didn’t hear from any students claiming they had been wrongly identified as a fraudster.

They have since refined and expanded the model, which uses a scoring system to determine if it believes a student is fake.
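How that scoring system actually works is not public. Purely as an illustration of the concept, a risk score might weight several of the signals the article mentions and hold an application for human review above a threshold; the weights, signal names, and threshold below are all assumptions.

```python
# Hypothetical weights for signals like those described in the article;
# the real model's features and thresholds are not public.
WEIGHTS = {
    "shared_phone": 0.4,       # phone number matches an applicant elsewhere
    "shared_email": 0.4,
    "odd_course_pattern": 0.3, # e.g., unrelated classes chosen alphabetically
    "suspicious_device": 0.3,  # device fingerprint seen on other applications
}
THRESHOLD = 0.6  # at or above this, hold the application for human review

def risk_score(signals):
    """Sum the weights of whichever signals fired, capped at 1.0."""
    return min(1.0, sum(WEIGHTS[s] for s in signals if s in WEIGHTS))

def review(signals):
    return "hold for review" if risk_score(signals) >= THRESHOLD else "pass"

print(review(["shared_phone", "suspicious_device"]))  # hold for review (0.7)
print(review(["odd_course_pattern"]))                 # pass (0.3)
```

A score-and-threshold design like this also matches the article's later point that the model improves as more colleges contribute data: the same signal can fire on applications at two different schools at once.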

College leaders say the tools will also help them more easily comply with the Trump administration’s new verification rules that will require more first-time applicants across the country to show identification.

The AI model now has three stages: one when students are applying, another when they register for courses, and a third when they apply for financial aid. The hope is that even if a scammer gets through the application process, the AI will catch them when they register for a class or apply for aid.

After implementing it in the Foothill-De Anza district, officials shared it with the nearby West Valley-Mission Community College District and its two colleges. Eventually, college presidents from across the state started calling and asking if they could use the AI.

That included Meridith Randall, the president of Golden West College in Huntington Beach.

Like many others, Golden West has been plagued by fraudsters in recent years. The college was hit particularly hard in the summer of 2023, shortly before the fall term. Officials there at the time had a “moment of euphoria,” thinking their enrollments were climbing, with many of their classes completely full.

“And then you realize these are not real students,” Randall said.

At that time, Randall said the staff started to grow suspicious when they noticed classes were filling in subjects near the top of the alphabet, like accounting and business, while other classes had open seats. They also noticed the course-taking patterns of some students “made no sense,” such as a student enrolling in accounting, but also a jewelry-making class, Randall said.

In several classes, staff and faculty would eventually identify between 10 and 15 scammers and drop them. Other fraudsters likely slipped through undetected, since the college at the time was relying on human staff who, Randall acknowledged, "were burning out."

Golden West implemented the AI tool ahead of the spring 2025 semester and immediately noticed a change: instead of flagging 10 or 15 fraudsters per class, faculty now catch at most two or three fake students who need to be dropped. Randall estimates that the AI tool has eliminated 96% of fraud.

In the past, Randall recalls fraudsters constantly evolving and outpacing the detection tactics of human staff. “Three years ago, we were looking for, is this even a real address? Well, as the fraudsters got more sophisticated, they started using real addresses, so then you had to look for something else,” she said. “AI can learn the patterns and recognize them much faster than a human.”

As AI is introduced at more colleges across the state, it’s likely to become even better at detecting fraud, Hadsell said. More data and more information mean AI learns more about what fraud looks like.

“That’s all just continuously training the model. If it detects someone at Meridith’s institution that also has an application in another institution, it can flag that in both places,” Hadsell said. “It’s like a network effect across the system. It amplifies our ability to stop it.”

Michael covers higher education. Prior to joining EdSource, he was a reporter in Washington, D.C. He received a B.A. in journalism from Syracuse University.