A Better Way to Measure and Rank Medical Schools

— Good assessment is about rating, not ranking

MedicalToday

We love to rank. There's something thrilling, even validating, about seeing whether our own perceptions of value and quality match public opinion. We rank our favorite neighborhood coffee shops. We rank just about everything else, too.

One would think that the way we rank our institutions of higher education would be far less trivial and far more meaningful. Instead, popular rankings have been largely subjective, stirring discussion while leaving little room for accountability. I assert that a rating system would be more effective than a ranking system when it comes to measuring what matters, especially for the schools that train our national health workforce.

Rankings give us comparisons of performance without any assurance of quality. Ratings, on the other hand, ask about the quality of the item itself: does it cross the threshold for performing the way it should?

Recent months have seen multiple schools spurn the U.S. News and World Report annual rankings, citing incongruence between the factors the rankings measure and the values of the schools, their prospective students, and their communities. These notable defections come in the wake of national conversations about how universities operate and how their practices -- in concert with local and national policy -- help or hurt equity and the mission of education.

Leaders of medical schools nationwide are recognizing the flawed structure. One administrator said that their program's departure was "not because of concerns that these rankings are sometimes based on data that can be inaccurate or misleading, but because the rankings measure the wrong things." Another said that the rankings "perpetuate a narrow and elitist perspective on medical education...rather than measuring a school's success in educating a diverse and well-trained cohort of doctors able to change medicine for the better and meet society's needs."

At their best, rankings reflect the value that consumers place on a product or service. At their worst, they become political theater. They can be instructive. They can shape the budding opinions of new arrivals: prospective learners who are, for the first time, being presented with college options. Student decisions on which program to select will invariably impact their debt, their experience, and -- in the case of healthcare students -- where and how they provide care. But between the best and the worst of rankings is where we find ourselves today: a bloated beauty pageant.

A November 2022 article in STAT News highlighted the shortcomings of the popular U.S. News rankings, noting that research dollars, reputation, and low acceptance rates are glorified over metrics that actually indicate whether a medical school is fulfilling its responsibilities to society.

According to the Association of American Medical Colleges, the mission of medical schools is to educate tomorrow's doctors and prepare them to meet society's evolving health needs. The recent exodus of schools from the U.S. News rankings indicates, to me, that many school leaders do not see the rankings as representative of their value to society -- and they are right.

The central problem with the way we currently rank our medical schools is that the rankings measure privilege more than purpose. The rankings provided by U.S. News and World Report are calculated from factors such as peer assessment, student selectivity, and MCAT scores. Exclusive access to a school may be a strong indicator of public demand, but that in and of itself is perverse. That demand is then amplified by rankings that place a disproportionate value on public perception, making a school more appealing based on visibility instead of effectiveness.

If schools state that their missions are rooted in social responsiveness, shouldn't they be rated on how well they achieve their stated vision?

A study in the Annals of Internal Medicine offered an alternative way to measure and rank medical schools. The authors took into account factors that matter for closing wide health disparity gaps -- namely, the percentage of graduates who practice primary care, work in health professional shortage areas, and are underrepresented minorities -- and combined those factors into a composite social mission score. Even without an ordinal ranking list, rating how schools perform in these areas gives us a better idea of whether and how they contribute to a social mission.

There have to be better ways to convey to prospective students -- and everyone, for that matter -- how schools perform when it comes to what actually matters for the health and betterment of society. Medical schools in particular are high-leverage opportunities since our entire physician workforce passes through this space. The same goes for schools of nursing, dentistry, pharmacy, and public health.

In fact, a rating system would be far more effective to that end than a ranking system. No system is perfect, and rankings can be informative, but they leave little room for nuance. Rating schools by their commitment to social mission can offer more of both. The key difference is that while rankings incentivize comparison against others, ratings encourage a school to measure its performance against the social mission of health education.

If we change the way we measure and rank in health and medical education, others will likely follow suit. There is a more just way to measure what matters; the commitment to social mission needs to be made.

The author is executive director of Social Mission Alliance, senior Atlantic fellow for health equity at George Washington University, and lecturer of family and community medicine at the University of New Mexico.