
In the aftermath of the pandemic, changes in people’s travel patterns and public transportation use have led regional transit organizations to reexamine their resource allocation. Many have turned to artificial intelligence and machine learning programs that can take into account the numerous and complex variables that must be considered to make decisions about things like bus routes, wait times and fare collection. While these programs can process large volumes of data to aid decision-making, their use has also raised concerns about the ease with which bias can creep into their recommendations.
Zhiwei Chen, PhD, an assistant professor in Drexel’s College of Engineering, studies efficiency, impartiality, sustainability and resilience in transportation. His Connected & Automated Mobility Lab focuses on how best to ensure that emerging technologies, like AI and autonomous vehicles, can be deployed in transportation systems, including public transit, to maximize societal benefits. Chen’s recent paper offered a solution for ensuring that travel choice prediction programs not only embed fairness in the recommendation process, but can also show their work to verify it.
Chen recently shared his insights with the Drexel News Blog about how machine learning programs are being used and what can be done to prevent them from amplifying bias in decision-making processes.
Can you tell us a bit more about how travel choice prediction programs/algorithms are currently being used?
Travel choice prediction models, such as mode choice models, are among the essential modeling tools in transportation planning. Agencies including metropolitan planning organizations (such as the Delaware Valley Regional Planning Commission in the Greater Philadelphia region), regional/rural planning organizations and state departments of transportation use them to forecast how people are likely to travel under different scenarios. For example, they will use the models to project whether a new transit line will attract drivers, or how pricing and tolling policies might affect car versus transit use.
While many agencies still rely primarily on traditional econometric models, there’s a growing interest in machine learning and deep learning models because they can capture more complex behavior patterns and achieve higher predictive accuracy.
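To make this concrete, here is a minimal sketch of a binary mode choice model (car vs. transit), using logistic regression as a stand-in for the logit models agencies have traditionally used. All trip attributes, coefficients and data in it are synthetic, illustrative assumptions, not figures from any agency or from Chen’s work.

```python
# A minimal sketch of a binary mode choice model (car vs. transit).
# All trip attributes and coefficients below are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical trip attributes: in-vehicle times (minutes) and costs ($).
car_time = rng.normal(25, 8, n)
transit_time = rng.normal(40, 12, n)
car_cost = rng.normal(6.0, 2.0, n)
transit_cost = rng.normal(2.5, 0.5, n)

# Synthetic choices: travelers weigh time and cost differences.
utility_gap = -0.08 * (transit_time - car_time) - 0.5 * (transit_cost - car_cost)
chose_transit = (rng.random(n) < 1 / (1 + np.exp(-utility_gap))).astype(int)

X = np.column_stack([car_time, transit_time, car_cost, transit_cost])
X_train, X_test, y_train, y_test = train_test_split(X, chose_transit, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

# Scenario analysis: forecast transit share if a new line cuts transit times by 25%.
X_scenario = X_test.copy()
X_scenario[:, 1] *= 0.75
print(f"Predicted transit share, baseline: {model.predict(X_test).mean():.2f}")
print(f"Predicted transit share, faster transit: {model.predict(X_scenario).mean():.2f}")
```

The scenario step at the end is the planning use case Chen describes: the same fitted model is queried under a changed input (faster transit) to project how mode shares would shift.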
What sorts of decisions do the organizations make based on guidance from travel choice prediction programs?
These predictions directly shape high-stakes decisions: how much investment to put into public transit, where to add roadway capacity, whether to fund active travel infrastructure (bike/pedestrian), or how to evaluate accessibility for disadvantaged communities.
If the models are biased, then those downstream investment and policy decisions may inadvertently fail to satisfy the travel needs of certain communities.
Are there some well-known examples of how bias or unfairness inherent in current programs have impacted communities?
I use two examples in my own teaching on transportation system planning, one on resource allocation and the other on cost, or burden, distribution.
In 1994, the Labor/Community Strategy Center, together with other community organizations and residents of Los Angeles County, filed a Title VI civil rights class action against the Los Angeles County Metropolitan Transportation Authority (LA Metro). The suit alleged that LA Metro unlawfully discriminated against inner-city, transit-dependent bus riders in how it allocated public transportation resources.
Starting with the Federal-Aid Highway Act of 1956, many roads were built through low-income neighborhoods. This destroyed homes and businesses, pushed more than a million people out, and split communities apart. Highways were often routed through communities of color, which increased pollution, widened economic gaps and reinforced segregation.
How might these situations have turned out differently if fairness were baked into the decision-making process?
For example, LA Metro could have run formal hypothesis tests by route/line and rider group to detect any proposed budget or service plan that reduces service quality for transit-dependent riders beyond a preset disparity threshold. Any plan that failed the test would be redesigned (for example, by restoring frequency and capacity or adding bus priority) until the gap fell within the threshold. This way, rail investments could still proceed, but only after bus service quality gaps were mitigated enough to pass the tests.
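As a rough illustration of what such a test could look like, the sketch below compares proposed headways on routes serving transit-dependent riders against all other routes, using a two-sample t-test and a preset disparity threshold. The route data, group labels and 10% threshold are all hypothetical assumptions, not values from the LA Metro case.

```python
# A hedged sketch of a disparity test on a draft service plan: compare
# proposed service quality (scheduled headways, in minutes) on routes
# serving transit-dependent riders against all other routes.
# All data, labels and thresholds below are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Proposed headways (minutes) under a draft service plan, by route group.
headways_dependent = rng.normal(15.0, 3.0, 40)  # routes serving transit-dependent riders
headways_other = rng.normal(12.0, 3.0, 60)      # all other routes

DISPARITY_THRESHOLD = 0.10  # allow at most a 10% relative gap in mean headway
ALPHA = 0.05                # significance level for the hypothesis test

gap = headways_dependent.mean() / headways_other.mean() - 1.0
t_stat, p_value = stats.ttest_ind(headways_dependent, headways_other)

print(f"Relative headway gap: {gap:+.1%} (p = {p_value:.3f})")
if gap > DISPARITY_THRESHOLD and p_value < ALPHA:
    print("Plan fails the disparity test: redesign before investment proceeds.")
else:
    print("Plan passes: service-quality gap is within the preset threshold.")
```

In practice an agency would run a test like this for every proposed plan and every protected rider group, and only advance plans that pass.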
If this process had been employed, the pattern that triggered the Title VI lawsuit and consent decree would have been corrected up front, with more vehicles on core routes, bus-priority lanes and better service span and frequency, not years later under a court order.
Could the framework you developed be used to fix other algorithms/programs that have experienced similar problems with bias?
Yes, our framework is general. It is designed as a post-processing “fairness layer” that can sit on top of any deep learning classifier. This means it could be adapted to other AI applications where disparities in predictive accuracy across groups are a concern. For example, it could be used to support traffic safety risk models, travel demand forecasting, or even applications outside of transportation, such as energy-choice modeling, health care triage and public housing waitlists. In short, it applies anywhere predictive accuracy or error rates differ by group.
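As an illustration of the general idea of a post-processing fairness layer (not the specific method in Chen’s paper), the sketch below adjusts a trained classifier’s decision threshold separately for each group so that true positive rates are roughly equalized. Every score, group label and target value in it is a synthetic assumption.

```python
# An illustrative post-processing "fairness layer": after a classifier is
# trained, pick a separate decision threshold per group so that true
# positive rates are roughly equalized. This is a generic sketch of the
# idea, not the paper's actual method; all data below is synthetic.
import numpy as np

def tpr_at_threshold(scores, labels, t):
    """True positive rate among positives when predicting score >= t."""
    positives = labels == 1
    return (scores[positives] >= t).mean()

def group_thresholds(scores, labels, groups, target_tpr=0.80):
    """For each group, pick the highest threshold achieving the target TPR."""
    thresholds = {}
    for g in np.unique(groups):
        mask = groups == g
        candidates = np.sort(np.unique(scores[mask]))[::-1]
        thresholds[g] = next(
            (t for t in candidates
             if tpr_at_threshold(scores[mask], labels[mask], t) >= target_tpr),
            candidates[-1],
        )
    return thresholds

# Synthetic classifier scores that run systematically lower for group "B",
# mimicking a model whose accuracy differs across groups.
rng = np.random.default_rng(2)
labels = rng.integers(0, 2, 2_000)
groups = rng.choice(["A", "B"], 2_000)
scores = np.clip(0.5 * labels + rng.normal(0.25, 0.2, 2_000)
                 - 0.15 * (groups == "B"), 0, 1)

thresholds = group_thresholds(scores, labels, groups)
for g, t in thresholds.items():
    mask = groups == g
    print(f"group {g}: threshold {t:.2f}, "
          f"TPR {tpr_at_threshold(scores[mask], labels[mask], t):.2f}")
```

Because the layer operates only on a model’s outputs, it can be bolted onto an existing classifier without retraining, which is what makes the approach portable across the applications listed above.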
What’s the next step for this work? Have any organizations committed to using/testing this new framework?
This paper is a building block of a larger research agenda. My collaborators and I are now extending the framework to more complex, multi-class prediction problems such as distinguishing among many different travel modes, not just car vs. transit. We are also exploring integration into large-scale regional travel demand models. While no agency has formally adopted the framework yet, we are in early conversations with regional partners who are interested in testing fairness-aware AI tools in planning practice.
Reporters interested in speaking with Chen should reach out to Britt Faulstick, executive director of News & Media Relations, bef29@drexel.edu.

