When I read about the Utrecht move, my first thought was, I wonder what allowed them to break away? I started to think about other academic systems that have a large presence in global science. Could something like this have happened elsewhere?
How about in the USA? Probably not.
Interestingly, world rankings and other research-based public metrics are not as influential in the US as they are in other countries. And the major US ranking system, US News and World Report,
doesn’t measure research directly. When Barrett Taylor and I were working on our book Unequal Higher Education,
I noticed a difference between the way people from the US and other parts of the world responded to our work. The book uses a latent variable analysis to categorize colleges and universities in the US. Every non-US audience asked us why we didn’t include a research variable in our model, and no US audience asked that question. The simple answer is the same as for US News: we didn’t have to. Research largely covaries with other measures like admission selectivity and resource intensity, so those measures capture it without observing it directly. So far as I know, few academic departments or universities maintain explicit guidance on using metrics like the H-index for making hiring and promotion decisions. However, such criteria are absolutely used informally.
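For readers who haven’t run into it, the H-index mentioned above has a simple definition: a researcher has index h if h of their papers have at least h citations each. A minimal sketch in Python (the function name and the sample citation counts are my own, purely illustrative):

```python
def h_index(citations):
    """Return the largest h such that h papers have >= h citations each."""
    # Sort citation counts from highest to lowest, then walk down the list:
    # rank i (1-based) is a valid h as long as the i-th paper has at least
    # i citations. The last rank where that holds is the H-index.
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Five papers with these citation counts give an H-index of 3:
# three papers have at least 3 citations each, but not four with >= 4.
print(h_index([10, 8, 3, 2, 1]))  # 3
```

Note how blunt the number is: it says nothing about teaching, service, or public engagement, which is precisely the complaint driving moves like Utrecht’s.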
Given the indirect use of research in public metrics and formal procedures, maybe the US is a good place to break out of the metrics rat race? Nope. Probably not. Publish or perish remains the standard for academics at research universities and at not-quite research universities in the US. And the AAU is a powerful anchor. AAU indicators
prioritize research metrics, like a lot. Check ’em out. Marginal AAU members fear losing membership (which happened not too long ago to the University of Nebraska and Syracuse). Major research universities on the cusp aspire to membership. All other universities with a research mission or pretenses just try to tread water and stay in the pool (my metaphors are dreadful). Groups like the University Innovation Alliance
promote collaboration but focus on undergraduate education and don’t threaten research competition. Arizona State’s President Michael Crow advocates for a New American University and Fifth Wave University
, which shares some ideas with the Utrecht plan. However, a skeptic might say that Crow’s model reflects the inability of upstart public universities to keep up with the most established research universities rather than a force that can disrupt the model that benefits the incumbent universities. ASU is not an AAU member, after all.
How about China? Probably not.
China is in the midst of a massive research catch-up project that began in earnest in the 1990s. The current program, the Double World-Class Project
, is designed to bolster China’s position. It focuses on raising academic disciplines and Chinese universities to world-class standing. The plan has a long time horizon, to 2050, and combined with previous investment in research, it seems to be working. As a result, Chinese science is taking off and now produces more academic papers than any other country.
Metrics are a big part of Chinese higher education, and that is probably not going to change soon. Along with the Double World-Class Project, the Belt and Road foreign policy features higher education components. Both seek to expand China’s cultural as well as economic influence and cement its status as a superpower. I am not an expert on Chinese higher education, but I don’t see a moment to break from metrics emerging from Beijing.
How about some of the larger European countries like Germany, France, or Italy? Probably not.
To be fair, these countries are not at the top of my mind when I think of competition in university rankings or academic metrics. This is partly because a lot of work happens in national languages, which the metrics recognize less, though most science papers are in English no matter where the researchers are from. My understanding is that the French are a bit upset about their lackluster performance in world rankings (see this example
; anyway, this is my impression, I don’t have deep insights here), but the Germans are less concerned (again, my impression). Italian higher education remains somewhat of a mystery to me (I probably shouldn’t admit this). These systems have not adopted the American research university model, which has been taken up more or less in China and much of the world. They don’t seem well-positioned to implement and propagate an organizational innovation like the one proposed in Utrecht. I could well be wrong. Let me know if I am and why.
How about the UK or Australia? Probably not.
Australia’s higher education system is facing deep and painful cuts
. The cuts have the potential to weaken the sector for a while. And besides, Australian research funding prioritizes metric outputs. As the government research funding page explains, “Research performance is also often a basis for academic hiring and promotion, acting as an incentive (beyond inherent motivation) for individual researchers to maintain research activity.” Seems an unlikely candidate to bust out of the global metrics system.
Same for the UK, though it’s not getting the same hit Australia is. But the Research Excellence Framework
, a hyper-competitive periodic evaluation of research performance down to the department level, determines research funding for nearly a decade. The REF is underway now for the first time since 2014. This system locks the UK into metrics-attention for the foreseeable future, I think.
Others? I don’t know, probably not? This is not an exhaustive analysis.
So what’s the point?
If Utrecht is able to successfully implement a policy of excluding research metrics like impact factors, H-indexes, and grant Euros from hiring and promotion decisions, it would be distinctly positioned. Governance arrangements that give universities autonomy, and funding structures that seemingly allow such a move, make Utrecht secure enough to take the risk.
Burton Clark’s famous triangular coordination model positions higher education between controlling forces. Simon Marginson summarized the model this way in an open access book:
Clark locates three Weberian ideal types at the points of the triangle: systems driven by states, systems driven by market forces, and systems driven by academic oligarchies. He positions each national higher education system within the triangle, with the United States closest to market coordination, Soviet Russia closest to state control, Italy closest to academic oligarchy, and so on.
The model is imperfect and needs updating (well, it’s been updated a million times, so maybe not?), but it points to a real challenge in how to assess academic work and distribute opportunities and rewards.

Over the past several decades, direct state control of higher education has given way to more autonomous forms of governance. Even in party-state China, which features strong central planning, the system is too large and complex for direct control by the central government, though the party plays a large role in university governance.

At the same time, the academic oligarchy (control of the university by professors) is seen as increasingly unacceptable by society and by the non-professor academics in the systems where professors have the most power. Most societies want some type of social responsiveness from higher education and do not accept that professors can occupy the university for themselves. Even if we worry about contingency and a weakening academic profession, which I do worry about, I think it’s also untenable to say that the faculty have no responsibility to anyone other than other professors, even if we vigorously defend academic freedom.

With the state steering higher education from more of a distance and the academic oligarchy unable to retain supreme control in many systems, “the market,” or something market adjacent, has taken a stronger role in system steering. And that is where metrics madness comes in.
Metrics also work because they tap into individual academics’ sense of competition and drive. Even critical academics who vigorously oppose neoliberal evaluations are themselves often individually ambitious and want to influence their peers. Academics, we are a vain lot; we want recognition (I mean, come on, I am writing this thing at 10:00 PM on a holiday because why? Partly because I like the sound of my own keyboard clacking away).
So now I appear to be drifting … so, let’s try to tie this thing up. Here is how I think Utrecht’s model might work. If departments are able to establish consensus methods of evaluating individual and collective work based on standards of openness and social responsiveness that value teaching, research, public engagement, and service, then good for them. If they can get enough buy-in from other academics, the media, and government to persuade other universities to do the same, then the approach could spread, starting in Northern Europe. How it gets beyond the region is a sticking point. I think they’d have to reset worldwide cultural expectations about academia. To do that, I think funding and reward structures would have to change to reflect and encourage a shifting culture. Maybe it’s possible in this super vague model I have outlined. Maybe. But inertia is strong.
A favor, please.
If you read all the way to this point, then you probably liked or hated what I had to say (the internet has lots of hate reading). Either way, please consider subscribing to my newsletter blog thing-y and sharing this post on social media. Cheers!