The Price of Playing Nice: How Community and Costs Shape Our Teamwork
Hey there! Ever wondered why getting everyone to play nice and cooperate can be so darn tricky, especially when we’re talking about big groups of folks who don’t really know each other? It’s a classic puzzle, right? We all know that teamwork makes the dream work, but sometimes, going it alone seems mighty tempting. This is where the idea of community enforcement comes into play – basically, how groups can nudge their members towards cooperation, even when there’s no big boss watching over everyone.
Think about it: in small, tight-knit communities, if someone steps out of line, word gets around, and direct consequences usually follow. But what about in larger societies, where we’re often just “strangers in a crowd”? That’s where things get interesting, and where researchers like Kandori (way back in ’92) and Ellison (’94) did some groundbreaking work. They explored how, even among anonymous individuals, cooperation can be upheld by the threat of community-wide punishment. It’s like a social contagion – one bad apple can spoil the bunch, but the fear of everyone turning their back can also keep potential bad apples in line.
The Balancing Act: Temptation vs. The Greater Good
Now, this whole “contagious equilibrium,” as it’s called, is a bit of a delicate dance. It hinges on a couple of things. If folks are too impatient (what economists call a “low discount factor”), they might just grab the immediate benefits of not cooperating. This is the “first-order social dilemma” (FOSD) – the classic temptation to be selfish.
On the flip side, if people are too patient (a “high discount factor”), they might think, “Hmm, why should I be the one to punish this person? It’s costly for me, and it might mess up our overall cooperative vibe.” This is the “second-order social dilemma” (SOSD) – the reluctance to enforce the rules, even if you believe in them. It’s like, “I want cooperation, but I don’t want to be the bad guy who starts the punishment train.”
In our work, we decided to look at these dilemmas from a slightly different angle. Instead of just focusing on how patient people are (which is a pretty personal and fixed thing), we zoomed in on the cost of cooperation. Think about it – the cost of doing the “right thing” can change depending on the situation. Sometimes it’s a small sacrifice, other times it’s a big one. And importantly, these costs can often be influenced or even chosen.
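Here's a quick back-of-the-envelope way to see why patience and the price tag of cooperation are really two sides of the same coin. In the plain two-player repeated Prisoner's Dilemma (a toy warm-up, not the anonymous random-matching setting we actually study), where cooperating costs you \(c\) and hands your partner \(b\), the grim-trigger logic boils down to:

\[
\underbrace{\frac{b-c}{1-\delta}}_{\text{keep cooperating forever}} \;\ge\; \underbrace{b}_{\text{defect once, then get nothing}}
\quad\Longleftrightarrow\quad
\delta \;\ge\; \frac{c}{b}.
\]

So in the toy case, the familiar patience condition is literally a condition on the cost-to-benefit ratio, which is exactly the dial we turn in what follows.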
Why Costs Are Key
By shifting our focus to the cost of cooperation, we found some pretty cool things. We’re talking about how the actual price tag of being a team player affects whether community enforcement works, and this is largely independent of how patient people are or even how big the group is! This is exciting because if costs are something we can adjust, maybe we can design environments where cooperation is more likely to flourish.
We used a standard setup for this kind of research: a bunch of anonymous individuals randomly paired up to play the classic Prisoner's Dilemma game, over and over. The social rule we looked at was the "grim" strategy: you cooperate until you experience (or witness) a defection, and from then on you punish by defecting forever. Sounds harsh, I know, but it's a good way to model strong community norms.
Our big question was: how does the relative cost of cooperation (that is, the cost compared to the benefit) change people’s willingness to stick to this “cooperate and punish” norm?

Unpacking the Second-Order Social Dilemma (SOSD)
One of our first big "aha!" moments came with the SOSD, that reluctance to punish. We found something pretty striking: if the cost of cooperation is above a specific threshold, then punishing bad behavior actually becomes the best thing to do for yourself, no matter how patient you are or how many people are in the group! And get this: the threshold isn't even that high. The SOSD only rears its head when the relative cost of cooperation is quite low, specifically below a threshold that never exceeds half the benefit.
So, if making a cooperative move costs you, say, 60 cents for every dollar of benefit it creates for someone else, you’re generally going to be willing to punish someone who breaks the cooperative spirit. It’s when cooperation is super cheap that people start to think, “Eh, maybe I’ll let this one slide.” This is a big deal because it tells us that the problem of people not wanting to punish isn’t as widespread as we might have thought, especially if cooperation has a reasonable, but not insignificant, price tag.
Finding the “Sweet Spot”: The Target Cost of Cooperation
Then we tackled both dilemmas – the FOSD (temptation to defect) and the SOSD (reluctance to punish) – together. And here’s another cool finding: for any group size and any level of patience, there’s a “target cost of cooperation” that makes community enforcement work like a charm. If the cost of cooperation hits this sweet spot, both the temptation to cheat and the reluctance to punish fade away.
This is super useful, especially in situations where the costs of cooperation can actually be chosen or influenced. Imagine if a community could somehow set the “price” of helping each other. If they can aim for this target cost, they can turn the tricky Prisoner’s Dilemma from a social problem into more of a coordination game – everyone just needs to agree on this “right” cost, and cooperation can follow.
This flips some conventional wisdom on its head. Often, we hear that high costs of cooperation are bad for teamwork. But we found that, sometimes, raising the cost of cooperation (within the right range) can actually bolster cooperation by making the punishment threat more credible. It's when cooperation is too cheap that things get wobbly, with people being more lenient towards rule-breakers.
A Peek Under the Hood: The Model
For those who like the nitty-gritty, our model involves a group of \(N\) individuals who live forever (in theory!). In each round, they're randomly matched in pairs and play a Prisoner's Dilemma. They can either Cooperate (C) or Defect (D). Cooperating costs you \(c\) but gives your partner \(b\) (where \(b > c > 0\)). Defecting costs nothing and gives nothing. We simplify this by looking at the cost-to-benefit ratio, \(\gamma = c/b\), which is our "cost of cooperation."
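If code is easier to read than prose, here's a minimal sketch of a single round under these assumptions (the function name and the default numbers are ours, purely for illustration):

```python
def stage_payoffs(my_action, partner_action, b=2.0, c=1.0):
    """One round of the pairwise Prisoner's Dilemma described above.

    Cooperating ("C") costs the mover c and gives the partner b;
    defecting ("D") costs nothing and gives nothing. Requires b > c > 0.
    """
    my_payoff, partner_payoff = 0.0, 0.0
    if my_action == "C":
        my_payoff -= c           # I pay the cost of cooperating
        partner_payoff += b      # my partner pockets the benefit
    if partner_action == "C":
        partner_payoff -= c
        my_payoff += b
    return my_payoff, partner_payoff

gamma = 1.0 / 2.0  # the cost of cooperation, gamma = c / b, with b = 2 and c = 1
```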
Everyone follows the grim strategy:
- Start by cooperating.
- If you see a defection, you defect in all future periods.
This creates two states: “cooperation” and “punishment.” A defection triggers a contagious spread of punishment throughout the community.
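To make the "contagious spread" concrete, here's a small simulation sketch under the assumptions above (random pairwise matching, grim behaviour, one seeded defector); the function and its defaults are our own illustration:

```python
import random

def rounds_until_full_punishment(n=50, seed=0):
    """Seed one defector among n grim players (n even) and count the
    rounds until the punishment state has spread to everyone."""
    rng = random.Random(seed)
    punishing = [False] * n
    punishing[0] = True                   # the one initial defector
    rounds = 0
    while not all(punishing):
        order = list(range(n))
        rng.shuffle(order)                # fresh random pairing each round
        for i in range(0, n, 2):
            a, b = order[i], order[i + 1]
            # Grim: anyone who meets a defector defects from then on,
            # so a punishing player converts a cooperative partner.
            if punishing[a] or punishing[b]:
                punishing[a] = punishing[b] = True
        rounds += 1
    return rounds

print(rounds_until_full_punishment())     # typically a modest number of rounds
```

Because each punisher keeps converting whoever they happen to meet, a single defection eventually shuts down cooperation for the whole group, and it's precisely that grim prospect that is supposed to keep everyone honest.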

The challenge, as we said, is that if the discount factor \(\delta\) (how much you value future payoffs) is too low, you might defect. If it's too high, you might not want to punish. Our work shows how adjusting \(\gamma\) can help navigate these issues.
When Does Cooperation Stick? (Proposition 1)
We found that for cooperation to be a stable outcome (a "subgame perfect equilibrium," in game theory speak), the cost of cooperation \(\gamma\) needs to be in a specific range. It has to be high enough so that you're willing to punish (solving the SOSD), but not so high that you'd rather just defect from the get-go (solving the FOSD). This range depends on the discount factor \(\delta\) and the population size \(N\).
If you imagine a graph with the discount factor on one axis and the cost of cooperation on the other (like our Figure 1 in the paper), the "sweet spot" for cooperation is a kind of corridor. As people get more patient (higher \(\delta\)) or as the cost of cooperation gets higher (higher \(\gamma\)), this corridor for stable cooperation tends to expand. But it's still a specific zone, meaning that just any old cost or patience level won't do.
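In symbols (our own shorthand; we're not reproducing the paper's exact bounds here), the corridor reads roughly as

\[
\underline{\gamma}(\delta, N) \;\le\; \gamma \;\le\; \overline{\Gamma}(\delta, N),
\]

with the lower bound coming from the willingness-to-punish (SOSD) constraint and the upper bound from the willingness-to-cooperate (FOSD) constraint.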
Solving the Punishment Problem (Theorem 1)
Remember how we said the SOSD (not wanting to punish) isn't always a huge issue? Theorem 1 in our paper really nails this down. It states there's a threshold cost of cooperation, let's call it \(\overline{\gamma}\), which is always less than one-half (meaning the cost is less than half the benefit). If your actual cost of cooperation \(\gamma\) is at or above this \(\overline{\gamma}\), then punishing defectors is always your best move, regardless of your patience (\(\delta\)) or the group size (\(N\))!
Think about that! It means that for a big chunk of possible scenarios (where cooperation isn't dirt cheap), the threat of community punishment is totally credible. Figure 2 in our paper shows how this threshold \(\overline{\gamma}\) behaves; it actually settles to a value below 0.5 pretty quickly as the population grows. So, if \(\gamma \ge \overline{\gamma}\), the SOSD just vanishes. Players only need to worry about whether it's worth cooperating in the first place (the FOSD).
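Stated loosely in our own notation (a paraphrase, not the paper's exact wording):

\[
\gamma \;\ge\; \overline{\gamma}(N) \;\Longrightarrow\; \text{punishing a defector is a best response for every } \delta \in (0,1),
\qquad \text{with } \overline{\gamma}(N) < \tfrac{1}{2} \text{ for all } N.
\]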

The Magic Number: Our Target Cost of Cooperation (Proposition 2)
This is where things get really practical. We identified a specific value for the cost of cooperation, which we call the target cost of cooperation. Let's label it \(\Phi_2(\delta)\); it depends on how patient people are. If the actual cost of cooperation \(\gamma\) is set to this \(\Phi_2(\delta)\), then the grim strategy works beautifully to maintain cooperation.
Why? Because this target cost is perfectly balanced. It's high enough to make sure people are willing to punish (it's greater than the minimum needed to solve the SOSD), but it's also low enough to make sure people still want to cooperate in the first place (it's less than the maximum allowed before the FOSD kicks in). Our Figure 3 illustrates this target cost: it's a line that sits comfortably within that "cooperation corridor" we talked about earlier. As players get more patient, this target cost of cooperation actually goes up. This makes sense: the more you value the future, the more it hurts to lose the whole stream of cooperative payoffs, so the punishment threat can deter defection even when cooperating today is more expensive.
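Put compactly, in the same shorthand as before:

\[
\underline{\gamma}(\delta, N) \;\le\; \Phi_2(\delta) \;\le\; \overline{\Gamma}(\delta, N) \quad \text{for every } N \text{ and every } \delta,
\]

so fixing \(\gamma = \Phi_2(\delta)\) satisfies the punish-when-needed and the cooperate-in-the-first-place constraints at the same time.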
What If We Can CHOOSE the Cost? (An Extension)
Now, let’s get even more creative. What if, before the game even starts, players can choose how “costly” their cooperative actions will be? And let’s say the benefit they provide to others is linked to this cost – maybe more costly cooperation provides a bigger benefit (up to a point, of course, because of diminishing returns).
In this scenario, that target cost of cooperation becomes a super powerful signal! We proposed an “extended” grim strategy:
- In an initial stage, everyone chooses to set their cost of cooperation to the target level, \(c^*\), where \(c^*/b(c^*) = \Phi_2(\delta)\).
- Then, in the game itself, you cooperate as long as everyone chose \(c^*\) and no one has defected. If someone chose a different cost, or if someone defects in the game, you switch to defecting forever.
Guess what? This strategy works! (That's our Proposition 3). By choosing this specific cost \(c^*\), players signal their willingness to cooperate and to uphold the norm. If someone deviates, either by picking the wrong cost or by defecting later, the punishment kicks in. This turns the whole thing into a coordination problem: if everyone can coordinate on this target cost, cooperation can be sustained. The beauty is that this \(c^*\) is determined by things like patience and group size, making it a clear focal point.
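As a hedged numerical sketch of how one might back out such a \(c^*\): suppose, purely for illustration, that the diminishing-returns benefit function is \(b(c) = 2\sqrt{c}\) and that the target ratio \(\Phi_2(\delta)\) happens to be 0.4 (both are placeholders of ours, not values from the paper). Then \(c^*\) can be found by simple bisection:

```python
def solve_target_cost(target_ratio, benefit, lo=1e-9, hi=10.0, tol=1e-9):
    """Find c* with c*/benefit(c*) = target_ratio by bisection.

    Assumes c/benefit(c) is increasing on [lo, hi] and that the
    target ratio is bracketed by the values at the endpoints.
    """
    ratio = lambda c: c / benefit(c)
    assert ratio(lo) <= target_ratio <= ratio(hi), "target not bracketed"
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if ratio(mid) < target_ratio:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

benefit = lambda c: 2.0 * c ** 0.5      # placeholder b(c) with diminishing returns
c_star = solve_target_cost(0.4, benefit)
# Here c/b(c) = sqrt(c)/2, so c* = (2 * 0.4) ** 2 = 0.64, and b(c*) = 1.6 > c*.
```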

Interestingly, this target cost \(c^*\) might not be the most "efficient" cost in terms of maximizing total payoff. But it's the cost that makes the enforcement mechanism work. There's a kind of "coordination cost" involved: the difference between the payoff at this target cost and the payoff at the most efficient cost. This is the price individuals are willing to bear to ensure that the system of community punishment is effective, especially when they can't directly see everything or meet the same people repeatedly.
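That "coordination cost" can be made concrete in the same placeholder setup, reading "efficient" as maximising the per-pair surplus \(b(c) - c\) (one natural interpretation, not necessarily the paper's exact definition):

```python
# Continuing the placeholder example from above: b(c) = 2*sqrt(c), c* = 0.64.
benefit = lambda c: 2.0 * c ** 0.5
c_star = 0.64

def surplus(c):
    """Per-pair surplus when cooperation costs c: benefit minus cost."""
    return benefit(c) - c

# 2*sqrt(c) - c is maximised where 1/sqrt(c) - 1 = 0, i.e. at c = 1.
c_efficient = 1.0

coordination_cost = surplus(c_efficient) - surplus(c_star)
# surplus(1.0) = 1.0 and surplus(0.64) = 0.96, so the coordination cost is 0.04.
```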
So, What’s the Takeaway?
The old wisdom said that cooperation among strangers needs patient individuals. Our work adds a crucial layer: the cost of cooperation itself is a massive lever.
- We found that the fear of punishment (the SOSD) is less of a hurdle than we thought, especially if cooperation isn’t super cheap. There’s a clear threshold (that’s less than 50% cost-to-benefit) above which punishment is credible, no matter what.
- Even better, for any situation, there’s a target cost of cooperation that can make the whole system click, solving both the temptation to cheat and the reluctance to punish.
When costs are something we can choose or influence, this target cost can act as a powerful coordination device. It suggests that cooperation among strangers is more likely if we can actively shape the “price” of playing nice. It’s not just about being patient; it’s about setting up the game so that doing the right thing, and making sure others do too, is in everyone’s best interest. It’s a rather optimistic thought, isn’t it? That by understanding these dynamics, we can perhaps build systems and communities where cooperation isn’t just a hopeful wish, but a robust reality.

Source: Springer
