Performance-Based Funding in Higher Ed: Friend, Foe, or Just Misunderstood?
Alright, let’s talk about something that’s been shaking up universities for a while now: Performance-Based Funding, or PBF as we’ll call it. You know, the idea that governments tie a chunk of university cash to how well they ‘perform’ on certain metrics. Sounds sensible on the surface, right? Get more bang for the public buck! This whole concept kicked off in the US back in the late 70s, driven by those classic neoliberal ideas about making public services more ‘efficient’ when money’s tight. The big hope was that PBF would nudge Higher Education Institutions (HEIs) to up their game in teaching and research. But here’s the kicker: a heap of research shows the positive effects are pretty meh, or even non-existent, and PBF sometimes creates a host of new problems of its own. Yet, we keep seeing PBF pop up, and the assumption that it just works in a straight line rarely gets a proper challenge. So, I’ve dived deep into the research ocean with a meta-narrative review to see what’s really going on and to suggest where we should point our research compass next. We’re not just going to bash neoliberalism again; instead, we’re asking: when does PBF actually work, when does it flop, and why?
How I Tackled This Beast: The Meta-Narrative Approach
To get a grip on this, I used what’s called a meta-narrative review. Fancy term, I know, but it’s super useful when you’ve got a research field where everyone’s looking at the problem slightly differently and using all sorts of methods. It’s like trying to piece together a story from different storytellers – you map out how the field has developed, pull together what we know, and then figure out what questions we still need to ask. The PBF literature is a perfect candidate for this, full of different takes and underlying assumptions. I sifted through a whopping 90 publications from 2011 to 2021. Started with a big trawl through databases like Web of Science and Scopus, and even did a bit of Google Scholar detective work for some niche topics, like how PBF links up with how universities actually manage their money internally. Turns out, that last bit is seriously under-researched, which is a juicy find in itself!
It’s pretty clear that interest in PBF has really ramped up since 2015, though it’s still a bit of a specialist area in higher education studies. Now, higher education is a different beast in every country, reflecting all sorts of political and economic setups. So, where is all this research coming from? Well, back in 2016, folks were saying most of it was US-centric. And while the US still leads the pack (about 37% of studies), we’re seeing a lot more research coming out of European countries like the UK and Italy. What’s really cool is the rise in comparative studies, looking at how PBF plays out across different countries, especially in Europe and the OECD. This is great because it lets us really dig into how different contexts shape PBF and whether we can tailor designs. After wading through all this research, I’ve spotted at least four big themes that keep cropping up. Let’s break them down.
Theme 1: The Good, The Bad, and The Ugly – Effects of PBF
First up, and the most popular kid on the block with nearly half the studies, is the effects of PBF. The general vibe here is pretty critical, often pointing fingers at the neoliberal agenda behind it all. The consensus? If there are positive effects, they’re usually small. But the unintended negative consequences? Oh boy, they’re serious, widespread, and seem almost impossible to avoid. We’re talking about:
- Fewer first-generation students or those from poorer backgrounds getting in.
- Making existing inequalities between universities even worse.
- Giving prestigious universities even more clout with policymakers.
- Universities ‘gaming’ the system to hit targets for teaching and research, sometimes at the expense of real quality.
- Academics feeling like they’ve lost a ton of day-to-day freedom.
- A crazy focus on individual academic ‘entrepreneurship’ and research, often meaning teaching and just being part of the university community gets sidelined.
It’s a mixed bag, really. One fascinating study from South Korea by Jeon and Kim (2018) showed something really nuanced: whether PBF increases inequality in research output depends entirely on how you measure inequality! Use a relative measure, and inequality seems to go down. Use an absolute measure, and it goes up. Both are valid, but they tell totally different stories. This kind of deep thinking about methods is pretty rare, unfortunately. Most studies just give a quick nod to limitations. Another big issue is that many US studies just look at PBF as an ‘on’ or ‘off’ switch, without getting into the nitty-gritty of different PBF models or the specific situations they’re used in.
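To make the Jeon and Kim point concrete, here’s a tiny sketch of how the same data can tell two different inequality stories. The numbers are purely illustrative (not taken from their study), and the two measures used here — a top-to-bottom ratio and a top-to-bottom difference — are just simple stand-ins for relative and absolute inequality measures:

```python
# Hypothetical research-output counts for two universities, before and
# after a PBF reform. Illustrative numbers only.
before = {"uni_a": 10, "uni_b": 20}
after = {"uni_a": 20, "uni_b": 35}

def relative_gap(outputs):
    """Relative measure: ratio of the top producer to the bottom producer."""
    vals = sorted(outputs.values())
    return vals[-1] / vals[0]

def absolute_gap(outputs):
    """Absolute measure: difference between top and bottom producers."""
    vals = sorted(outputs.values())
    return vals[-1] - vals[0]

# Relative inequality FALLS (2.0 -> 1.75) while absolute inequality
# RISES (10 -> 15) on exactly the same data.
print(relative_gap(before), relative_gap(after))
print(absolute_gap(before), absolute_gap(after))
```

Both measures are defensible; which one a study reports can flip the headline finding, which is exactly why methodological transparency matters here.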

Theme 2: What Gets Measured, Gets Managed… Or Gamed? The Deal with Indicators
Next, we’ve got about 25 publications looking closely at the indicators and parameters used in PBF systems – you know, the specific things universities are judged on, and how the funding formulas are built. Some researchers are pretty critical, saying these need a serious rethink so universities are judged more fairly, especially considering their different starting points and who they serve. Others suggest ways to classify universities to help governments target their policies better, or even offer up different recipes for funding formulas.
The tricky thing is, what we decide to call ‘performance’ is massively shaped by policy goals and the specific higher education system we’re talking about. It’s an ideological exercise, really. For example, the UK’s Research Excellence Framework (REF) now includes ‘research impact beyond academia.’ Policymakers use this idea of ‘impact’ to make universities accountable for public cash, defining it as some kind of benefit to society, the economy, or culture. And they can tweak how much this ‘impact’ matters by changing how much funding is tied to it.
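To show how a weighting decision like that works in practice, here’s a toy funding formula. Everything in it — the indicator names, the weights, the university profiles, the size of the pot — is hypothetical, and real systems (like the REF) are far more elaborate. The point is simply that turning up the weight on one indicator, such as ‘impact,’ redistributes a fixed pot between institutions:

```python
# A toy performance-based funding formula over three hypothetical
# indicators. Weights are policy choices, not technical facts.
def pbf_allocation(universities, weights, pot=1_000_000):
    """Split a fixed pot in proportion to each university's weighted score."""
    scores = {
        name: sum(weights[k] * metrics[k] for k in weights)
        for name, metrics in universities.items()
    }
    total = sum(scores.values())
    return {name: pot * s / total for name, s in scores.items()}

universities = {
    "research_u": {"graduates": 2000, "publications": 900, "impact": 40},
    "teaching_u": {"graduates": 3500, "publications": 150, "impact": 25},
}

# Same universities, same pot: only the 'impact' weight changes.
low_impact = pbf_allocation(
    universities, {"graduates": 1, "publications": 2, "impact": 50})
high_impact = pbf_allocation(
    universities, {"graduates": 1, "publications": 2, "impact": 100})
```

With the higher impact weight, `research_u`’s share of the pot grows at `teaching_u`’s expense, even though nobody’s actual performance changed. That’s the ideological lever in miniature.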
While a lot of the talk is about ‘research excellence,’ some voices are pushing for education performance metrics to move beyond just counting bums on seats or graduation numbers, and towards actual quality and outcomes. One German study by Krempkow (2015) pointed out something interesting: the usual ‘winners’ in German PBF systems are often traditional research universities that attract students from privileged backgrounds who are likely to graduate anyway. But he argues the ‘real winners’ are newer, teaching-focused institutions that take in more disadvantaged students and actually help them succeed, adding more ‘value.’ He even nods to the Australian PBF model, which tries to account for the socio-economic mix of students. This is a neat counterpoint to findings, especially from the US, that strict PBF systems can actually make student bodies less diverse because they favor students who are seen as ‘easier’ to get through the system.
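Krempkow’s ‘real winners’ argument boils down to a value-added calculation. Here’s a minimal sketch with made-up numbers (not from his study): compare each institution’s actual graduation rate against the rate its student intake would predict, rather than ranking on the raw rate alone:

```python
# Hypothetical graduation rates vs. the rate predicted from each
# institution's student intake (socio-economic mix, prior attainment, ...).
institutions = {
    "research_u": {"grad_rate": 0.90, "predicted": 0.88},  # privileged intake
    "teaching_u": {"grad_rate": 0.75, "predicted": 0.65},  # disadvantaged intake
}

# Value added = how much better students do than the intake predicts.
value_added = {
    name: data["grad_rate"] - data["predicted"]
    for name, data in institutions.items()
}
# On raw rates, research_u 'wins' (0.90 vs 0.75); on value added,
# teaching_u does (roughly 0.10 vs 0.02).
```

Something along these lines is what intake-adjusted models, like the Australian one he mentions, are trying to capture.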
Theme 3: PBF – More Than Just Numbers, It’s Policy in Action
Then there’s a smaller group of studies, about 11 of them, that analyze PBF as public policy. This is where we look at how these policies even come to be, how they spread, and what policymakers, interest groups, and the folks affected by them actually think. Some researchers do things like content analysis of publications from organizations that push for certain policy solutions, or critical discourse analysis of policy documents to see what ideas are being promoted.
For instance, Ziskin and colleagues (2018) did a fascinating critical discourse analysis of PBF-related documents from Italy, the UK, and two US states. They were hunting for neoliberal economic concepts and how they were used to build and sell PBF policies. They found some common threads: policymakers were definitely trying to get HEIs to change their ways in return for rewards, with a big push for ‘performativity’ – think contributing to economic growth, better student outcomes, and more private funding.
Interestingly, they noticed a shift over time. Earlier on, especially in Europe, market ideas were seen as a good way to modernize higher education. But after 2010, the tune started to change a bit. Even the European Commission’s own science service put out a report warning about overdoing competitive funding, pointing to problems like under-resourced universities and brain drain in Italy. It shows that the conversation around these policies isn’t static; it evolves.
Theme 4: The Alignment Conundrum – Does National PBF Actually Trickle Down?
Finally, and this is a big one for me, only a handful of studies – just 6 out of 90! – explore the alignment (or lack thereof) between national PBF systems and how universities internally allocate resources or manage performance. This is such a crucial piece of the puzzle because it could tell us how, when, and why PBF actually works (or doesn’t) inside an HEI.
What these few studies suggest is pretty telling:
- Often, there’s a lot of talk (rhetorical compliance) about following PBF policies, but in practice, the link between national PBF and what happens inside departments is weak or non-existent.
- Sometimes the implementation is ‘soft,’ meaning individual academics might not even notice the impact of PBF indicators on their careers, or things are so murky they can’t tell what impact it’s having.
- There are huge variations in how PBF is put into practice across different countries, different universities, and even different faculties or departments within the same university!
This last point could be a big reason why studies looking at the overall effects of PBF often find ‘no intended effects’ – they’re looking at too high a level and missing all the messy details underneath.

It’s also striking that not all research in this area even looks at the whole chain – from government, to university admin, to faculty, to department, right down to the individual researcher. Only two studies I found really went there: one by Mouritzen and Opstrup (2020) and another by Aagaard (2015).
Mouritzen and Opstrup did an amazing deep-dive into how the Danish Bibliometric Research Indicator (BRI) was implemented. They used a mix of methods and looked at all levels. Their big takeaway? If they’d only looked at the national level, they would have missed the BRI’s real impact, which only showed up when they dug deeper, partly because universities implemented it so differently. They talk about ‘hard’ versus ‘soft’ implementation. ‘Hard’ means the financial incentives of the BRI really flow down to individual researchers. ‘Soft’ is more about managers using the BRI to influence staff without direct financial ties, like tracking publications or using BRI metrics in performance reviews. They found that places with ‘hard’ implementation had higher research output, but often of ‘lower quality’ (less prestigious publications) and more gaming of the system.
Aagaard’s study in Norway found something counterintuitive: even though the financial incentives of their research indicator were tiny, and policymakers said not to use it for individuals, universities often did just that for monitoring, hiring, and promotions – but with very little transparency and lots of variation.
Basically, this whole ‘alignment’ thing is like the 90% of the iceberg that’s underwater. We’ve barely scratched the surface, and there’s huge potential for more research here. We need to get under the water, so to speak!
So, What’s Wrong with the PBF Picture We Have? Gaps and Shortcomings
This whole review has really shone a light on some big gaps and methodological wobbles in how we’ve been studying PBF. For starters, we’re seriously lacking studies that look at all the different levels – macro (national policy), meso (the university), and micro (the academic) – all at once. And we’re definitely not looking enough at how these levels connect, or align. Only two studies out of ninety did this! That’s a problem because if you only look at the big picture, you can get a totally different story than if you zoom in. The fact that PBF research on ‘effects’ almost always finds a mixed bag might be because we’re losing all the juicy details by only looking at a single, usually high, level.
Because we’re not looking at all the levels, we also can’t really tell if there’s any alignment between national PBF policy and what universities actually do with their money and people internally. This ‘chain of alignment’ is a massive blind spot, so we don’t really understand the nitty-gritty of why PBF succeeds or fails in specific universities.
Then there’s this assumption, often lurking in the background (or even stated outright!), that PBF works like a simple lever: pull here, get result there. It’s a very linear way of thinking. Now, many studies rightly critique the whole New Public Management (NPM) thinking that PBF comes from, but then, ironically, they often fall back on these linear assumptions because it makes the analysis easier. Easier, maybe, but it often leads to oversimplified explanations, like blaming PBF failures on ‘weak institutional capacity.’ You know, the argument that under-resourced universities, especially those helping disadvantaged students, are just bound to do badly under PBF. But we can’t really prove that if we’re not looking at whether better alignment within those universities could actually help them, or if even well-resourced universities might struggle if their internal systems are all over the place. These are big, unanswered questions.
Beyond the Blame Game: Conceptual Conundrums and the Quest for Alternatives
The research also throws up some pretty hefty conceptual and theoretical head-scratchers. If PBF is so flawed, what are the alternatives? We hear a lot of criticism, but not so many concrete suggestions. And how could we tweak how universities are run internally to make PBF actually useful for everyone – academics, universities, and governments – in the long run?
One pretty radical take from Mizrahi (2020) uses game theory to argue that higher education systems just don’t have the right conditions for PBF to work. He says it’s doomed to fail because:
- The ‘principals’ (like governments) often can’t properly monitor the ‘agents’ (universities) cheaply or effectively.
- The ‘agents’ can game the system pretty easily without much fear of getting caught or suffering real consequences.
- PBF can’t fix the tangled web of accountability where an institution can be both an agent (to the government) and a principal (to its own faculties), leading to conflicting priorities.
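Mizrahi’s first two conditions can be illustrated with a back-of-the-envelope expected-payoff calculation. All the numbers below are invented for illustration (this is not his model, just the intuition): when monitoring is costly, detection is rare, and sanctions are mild, gaming the metric simply pays better than genuine effort:

```python
# Toy principal-agent payoffs. All values are hypothetical and chosen
# only to illustrate the incentive structure Mizrahi describes.
p_caught = 0.1   # monitoring is expensive, so agents are rarely caught
sanction = 2     # and the penalty when caught is mild
reward = 10      # funding gained by hitting the performance metric

genuine_effort_cost = 6  # real improvement is expensive
gaming_cost = 1          # massaging the numbers is cheap

payoff_genuine = reward - genuine_effort_cost                # 10 - 6 = 4
payoff_gaming = reward - gaming_cost - p_caught * sanction   # 10 - 1 - 0.2 = 8.8
# Gaming dominates, so a purely incentive-driven PBF invites it.
```

Under these assumptions the only ways to flip the ranking are costly ones — much better monitoring or much harsher sanctions — which is why Mizrahi reaches for trust and shared values instead.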
Mizrahi’s solution? Rebuild trust. Focus on shared professional values – like good teaching and good research – that both principals and agents can agree on. If everyone believes in what’s being measured, you cut down on gaming and resistance.
So, how do we build that trust while still keeping an eye on performance and accountability? One idea floating around is performance agreements. These are basically negotiated contracts between universities and funding bodies, outlining goals. Because they’re negotiated, universities get a say in what’s measured. They’re seen as more trust-based than rigid formulas and can be tailored to a university’s unique mission. In fact, a bunch of EU countries have been moving this way. But, they’re not a silver bullet – there’s still room for gaming, and power imbalances can creep in.
Another approach is co-designing the PBF system, like they did in Finland. Getting universities and other stakeholders involved from the start led to more buy-in. Still, it didn’t stop all the arguments about the details or the feeling that it’s a zero-sum game where some universities ‘win’ at others’ expense.
This brings up a critical question: is the ‘autonomy’ offered by things like performance agreements real, or just an illusion? Mizrahi argues that once PBF brings market logic into universities, they’re forced to play by market rules, often prioritizing profit over scholarly values. It’s an ‘autonomy paradox.’ Some might see these efforts to soften PBF as ‘mutant neoliberalism’ – basically, dressing up economic goals in non-economic clothes. Or, more optimistically, maybe it’s a genuine attempt to balance everyone’s interests. Either way, it shows the limits of market-driven governance and why we need to think about what comes next.

Charting a New Course: A Fresh Research Agenda for PBF
So, with all this in mind, where should future research on PBF set its sights? I reckon we need to:
- Move Beyond Just Bashing Neoliberalism: Instead of just repeating critiques, let’s explore if PBF can be transformed into something that actually benefits academics, universities, AND governments – a ‘win-win-win.’ Is PBF inherently broken, or are our current critiques just a reflection of how poorly we’ve studied it so far?
- Get Holistic with Our Methods: We absolutely must start looking at the macro, meso, and micro levels all together. We need to trace that whole chain of alignment from national policy right down to what happens in individual departments. Mixed-methods designs, combining the numbers with the stories, will give us much richer insights. This could show us if alignment (or the lack of it) is a bigger deal for PBF outcomes than we thought.
- Look Beyond Our Own Backyard: Higher education researchers could learn a lot by engaging more with the broader public policy literature. It often has solutions and perspectives that go beyond the usual NPM critiques and can offer fresh insights into how tools like PBF interact with the unique world of universities.
- Dare to Dream Post-Neoliberal: While tweaking existing PBF systems might help a bit, it might not be enough to fix the deeper issues tied to market-driven governance. Future research should really start thinking about and modeling alternative governance structures that can balance accountability and performance with protecting core academic values. This is about building more sustainable and fair higher education systems.
By tackling these gaps, I truly believe future research can give us a much deeper, more nuanced understanding of PBF. It’s not just about advancing academic debates; it’s about providing real, actionable insights for policymakers and universities who are trying to make higher education better for everyone.
Source: Springer
