Uncertainty Isn’t Just Lack of Info: It’s Disagreement (And Why That Matters for Decisions)
Let’s face it, life is a bit of a minefield, isn’t it? We’re constantly wading through doubts. What should I have for dinner? Should I take that job? Is this the right thing to do, morally speaking? Uncertainty is everywhere, from the tiny choices to the big, hairy societal issues. And honestly, it can be pretty paralysing.
You know that feeling, right? When you’re just not sure. Not sure what you want, not sure what’s going to happen, not sure what action to take. And yet, despite all this swirling doubt, we still have to *do* things. We need to make decisions, and hopefully, make them effectively.
Because uncertainty is such a huge part of navigating the world, it pops up in loads of different fields – philosophy, economics, medicine, even climate science. Everyone’s got their own take on it, their own labels: risk, ignorance, ambiguity, deep uncertainty, epistemic, ontic… the list goes on and on. It can get a bit overwhelming.
My starting point, and the thing that really got me thinking, is that uncertainty is problematic precisely because it makes choosing hard. We need to move through the world, make things happen, and uncertainty throws a wrench in the works. We’re constantly facing practical uncertainty – uncertainty about what we should actually *do*. To figure that out, we need principles to guide our decisions. And for *that*, we really need to get a handle on what uncertainty *is*.
What is Uncertainty, Really?
Now, the usual way folks think about uncertainty is that it’s just the opposite of certainty: if you’re not totally sure, you’re uncertain. Some think of certainty as a feeling (psychological certainty), others as a property of beliefs (epistemic certainty – is the belief absolutely indubitable?). But I’ve been exploring a different angle.
What if uncertainty isn’t just about missing information or a lack of knowledge? What if it’s fundamentally about conflict? I’m proposing that uncertainty is really about a disagreement between the reasons you have for and against different possible mental attitudes. Think about it: if you’re uncertain about something, it’s usually because you have reasons pulling you in different directions. Reasons to believe one thing, and reasons to believe the opposite. Reasons to want one thing, and reasons to want something else.
This applies not just to beliefs (cognitive attitudes, which are supposed to represent reality) but also to things like desires, hopes, or aversions (non-cognitive attitudes, which aren’t about representing reality). You can have conflicting reasons about what’s true, sure, but you can also have conflicting reasons about what you *want*. This is a bit like what some call ambivalence – that feeling of being pulled in opposing directions, not just being indifferent.
So, under this view, you’re uncertain about some attitude if there are mutually exclusive alternatives to it, and you don’t have absolutely conclusive reasons for any single one. The reasons I’m talking about here are the ones *you* see, the ones motivating you, not necessarily some objective, perfect reasons out there. These reasons could be evidence, arguments, feelings, values – anything that counts in favour of or against holding a particular attitude or taking a particular action.
The cool thing about this definition is that it covers both epistemic uncertainty (doubts about beliefs) and ambivalence (doubts about non-cognitive attitudes) under one roof. And the degree of your uncertainty? That depends on how balanced and strong those conflicting reasons are. If the reasons are perfectly balanced and really strong, your uncertainty is going to be pretty severe.
Consider new information. It might give you new reasons, or change the weight of existing ones. This can shift the balance and reduce your uncertainty, maybe even resolve it if the reasons for one alternative become conclusive. But here’s the catch: how much this helps depends on the *nature* of the disagreement between your reasons.
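To make this concrete, here’s a toy sketch of how you might score the degree of uncertainty from the balance and strength of conflicting reasons, and how new evidence shifts it. The function and the scoring formula are my own illustration, not a proposal from the literature; the weights are arbitrary.

```python
# Illustrative sketch: uncertainty as disagreement between weighted reasons.
# The names and the scoring formula here are invented for illustration.

def uncertainty(reasons_for, reasons_against):
    """Score in [0, 1]: high when conflicting reasons are strong AND balanced."""
    pro, con = sum(reasons_for), sum(reasons_against)
    total = pro + con
    if total == 0:
        return 0.0  # no reasons either way: no disagreement to speak of
    balance = 1 - abs(pro - con) / total   # 1 when perfectly balanced
    strength = total / (1 + total)         # grows with the total weight of reasons
    return balance * strength

# Strong, perfectly balanced reasons -> severe uncertainty
severe = uncertainty([0.9], [0.9])

# New evidence adds weight to one side, shifting the balance and
# reducing the uncertainty
after_evidence = uncertainty([0.9, 0.8], [0.9])
```

The point of the sketch is just the two dials the post describes: how evenly the reasons are matched (balance) and how weighty they are overall (strength). New information can move either dial.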

The Nasty Bit: Radical Disagreement
If uncertainty is about disagreeing reasons, then dealing with uncertainty is about dealing with that disagreement. Usually, when people disagree, we think someone’s making a mistake, or they’re missing some information. If we just got all the facts straight, removed any biases, and thought perfectly clearly (under ideal cognitive and epistemic conditions), the disagreement should melt away, right? I’ll call this kind of disagreement amenable.
But what if the disagreement *doesn’t* go away, even under those perfect conditions? What if it persists even when everyone is thinking clearly and has all the relevant information? This is what I’m calling radical disagreement. And if the disagreement underlying your uncertainty is radical, then no amount of extra evidence or logical analysis is going to make it disappear entirely. That’s a pretty significant implication!
So, the big question becomes: when can we have radical disagreement between reasons? Well, it seems pretty plausible for non-cognitive attitudes. Desires, tastes, feelings – these aren’t evaluated based on whether they accurately represent reality. My reason for liking coffee (the smell) and my reason for disliking it (the bitterness) aren’t about truth or falsehood. They’re just conflicting pulls. And even if I think perfectly clearly and have tasted all the coffee in the world, those conflicting reasons might still be there, leading to persistent ambivalence. Disagreement over non-cognitive attitudes can always be radical because it’s not tied to fitting reality.
But could radical disagreement happen with cognitive attitudes too? Maybe. What if the proposition you’re uncertain about doesn’t actually have a truth value? Like, it’s neither true nor false. Then more evidence won’t help settle whether it’s true. Or, even if it *does* have a truth value, what if it’s fundamentally inaccessible to us? If a fact is intrinsically unknowable, no amount of evidence will make it knowable. Disagreement over such propositions could be radical.
Even if we don’t go that far, for the purposes of making a decision *right now*, some propositions might be practically inaccessible. Maybe there are moral facts out there, and maybe they’re knowable in principle, but the philosophical debate is so far from settled that you’re not going to figure it out in time to make your decision. For that specific decision’s horizon, the disagreement over the moral truth can be treated as if it were radical.
The point is, there seem to be cases where disagreement between reasons just doesn’t tend to vanish with better thinking or more facts, even for things we might think are cognitive. And if uncertainty comes from disagreement, then some uncertainty will similarly persist. This doesn’t mean we can’t know anything; radical disagreement might be limited in scope. But it does mean we need to understand the *kind* of uncertainty we’re dealing with.
Mapping the Doubts: A Typology
Since uncertainty stems from disagreement, and disagreement can be radical or amenable, we can start mapping out different types of uncertainty based on the attitude involved (cognitive or non-cognitive) and whether the disagreement can be radical. This isn’t the only way to slice the pie, but it helps illustrate how the disagreement-based view works and connects to other ideas.
- Cognitive Attitudes: These are about representing reality.
  - Empirical Uncertainty: Doubts about facts that can be settled with evidence (like tomorrow’s weather). Disagreement here is usually amenable.
  - Logical Uncertainty: Doubts about logical truths or contradictions. Settled with logic. Disagreement is amenable.
  - Vague Uncertainty: Doubts about borderline cases of vague concepts (“Is Bob old?”). The proposition might be neither true nor false. Disagreement can be radical.
  - Ontic Uncertainty: Doubts due to genuine non-determinism in the world (like a quantum event, or maybe free will). The state doesn’t exist yet. Disagreement can be radical because no amount of current evidence can settle a future indeterminate state. (Note: This assumes the world *is* genuinely non-deterministic, which is debated).
- Non-Cognitive Attitudes: These aren’t about representing reality.
  - Emotive Uncertainty: Doubts about your own tastes or feelings (“Do I like coffee?”). This is Makins’ ambivalence. Since it’s not about truth, disagreement between reasons (smell vs. bitterness) can be radical.
  - Moral Uncertainty: Doubts about what is good or right. This is a bit of a special case because whether moral judgments are cognitive or non-cognitive is debated. But moral disagreements often *feel* radical (the Trolley Problem!). Even if moral facts exist, they might be practically inaccessible within the timeframe of a decision, making the uncertainty effectively radical. This can extend to other normative areas like aesthetics or rationality.

This typology shows how different sources and types of disagreement lead to different kinds of uncertainty. And crucially, it highlights which types we might expect to persist even when we’ve done our best to gather evidence and think clearly.
Beyond the Textbook: Uncertainty in Decisions
Now, how does this play out in decision-making? Standard decision theory, the kind you might see in a textbook, often simplifies things a lot. It usually assumes that all your practical uncertainty (uncertainty about what to do) can be boiled down to uncertainty about the “state of the world” (like, will there be traffic?) and that this uncertainty can be represented by a single probability number.
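For readers who haven’t seen the textbook picture, here is a minimal sketch of it: all the uncertainty sits in the probabilities over states, and you pick the act with the highest expected utility. The commute example and the numbers are my own illustration, not from any particular textbook.

```python
# Minimal textbook decision: uncertainty lives only in the state probabilities.
# Example acts, states, and utilities are illustrative.

probs = {"traffic": 0.3, "no_traffic": 0.7}

utility = {
    ("drive", "traffic"): 2, ("drive", "no_traffic"): 9,
    ("bike",  "traffic"): 6, ("bike",  "no_traffic"): 6,
}

def expected_utility(act):
    # Weight each outcome's utility by the probability of its state
    return sum(p * utility[(act, s)] for s, p in probs.items())

best = max(["drive", "bike"], key=expected_utility)
```

Notice how much this setup takes for granted: the list of acts, the list of states, a single precise probability for each state, and a precise utility for each outcome. The rest of this section is about doubts that attack exactly those ingredients.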
But I think that view is too narrow. Even if you knew *exactly* what the state of the world was, you might *still* be uncertain about what to do! Uncertainty can creep into other parts of the decision process:
- States: Yes, you can be uncertain about the state of the world (traffic or no traffic). As we saw, this can be empirical, but also potentially logical, vague, or ontic depending on what the state is about. You can also have second-order uncertainty – uncertainty about the probabilities you assign to these states.
- Model: How do you even set up the decision problem? What options should you consider? What possible outcomes matter? What states of the world are relevant? Being uncertain about these things is model uncertainty. Is using a bike an option? Is stealing a bike permissible? These questions can involve cognitive doubts (is it feasible?) but also normative or even emotive ones (is it permissible? Is it too unpleasant?). Uncertainty about what outcomes matter to you is often tied to emotive or normative uncertainty.
- Utility: How do you value the possible outcomes? This isn’t always straightforward. You might be uncertain about the precise value of an outcome, especially if it’s complex or involves things you have conflicting feelings or values about. This uncertainty about evaluation can be empirical or ontic if it relates to factual aspects of the outcome, but it’s often primarily emotive (doubts about your tastes) or normative (doubts about which values or ethical principles apply). What if the decision itself might change what you value (a transformative experience)? That’s a deep form of utility uncertainty.
- Probability: Beyond uncertainty about the state itself, you can be uncertain about the probabilities you’ve assigned. Are these numbers reliable? Are they based on good evidence? This is often cognitive (uncertainty about your own beliefs or the evidence), but could involve normative aspects (who is a reliable expert?).

This shows that the uncertainty we face in decisions is much richer and more varied than just not knowing the state of the world. It can touch every part of the decision process, from defining the problem to evaluating the outcomes and probabilities.
So, What Do We Do?
Given that uncertainty is a pervasive feature of life and decision-making, what’s the takeaway? The obvious impulse is to try and reduce it. And understanding the *nature* of the uncertainty is key to doing that effectively.
If your uncertainty is due to amenable disagreement – like simple empirical uncertainty about a fact – then the path is clear: get more evidence, think harder, remove biases. More information *will* help, at least in principle.
But if your uncertainty stems from radical disagreement – whether it’s vague, ontic, emotive, or certain kinds of moral uncertainty – then just gathering more facts about the world isn’t going to cut it. The disagreement isn’t about missing information; it’s inherent in the reasons themselves, or the nature of the thing you’re uncertain about. You can’t expect evidence to resolve it.
This doesn’t mean you’re stuck! It just means you need different strategies. Maybe it’s about finding ways to live with the uncertainty, making robust decisions that work reasonably well across different possibilities, or using different decision frameworks that don’t rely solely on precise probabilities and utilities. It might involve exploring the structure of the disagreement itself, rather than just trying to find more facts.
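One well-known example of such a framework is maximin expected utility over a set of probability distributions (“Gamma-maximin” in the imprecise-probability literature): when the probability of a state is radically unsettled, evaluate each act by its worst-case expected utility across all the distributions you can’t rule out. The sketch below reuses the illustrative commute numbers; the set of priors is arbitrary.

```python
# Robust choice under radical uncertainty: maximin expected utility
# over a set of admissible probability distributions (Gamma-maximin).
# Acts, utilities, and the prior set are illustrative.

utility = {
    ("drive", "traffic"): 2, ("drive", "no_traffic"): 9,
    ("bike",  "traffic"): 6, ("bike",  "no_traffic"): 6,
}

# The disagreement over how likely traffic is never resolves,
# so we keep every distribution we can't rule out.
priors = [{"traffic": p, "no_traffic": 1 - p} for p in (0.2, 0.5, 0.8)]

def worst_case_eu(act):
    # Evaluate the act by its expected utility under the least
    # favourable admissible distribution
    return min(sum(p * utility[(act, s)] for s, p in dist.items())
               for dist in priors)

robust_choice = max(["drive", "bike"], key=worst_case_eu)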
My hope is that this view – seeing uncertainty as based on disagreement, some of which can be radical – gives us a more nuanced and useful way to think about the doubts that plague our decisions. It connects the feeling of being unsure to deeper philosophical ideas about reasons, disagreement, and the nature of reality and value. It reminds us that not all uncertainty is created equal, and that’s a crucial insight for navigating the complex choices we face every day.
Source: Springer
