Incommensurability and Hardness, Chrisoula Andreou
There is growing support for the view that there can be cases of incommensurability, understood as cases in which two alternatives, X and Y, are such that X is not a better option than Y, Y is not a better option than X, and X and Y are not equally good options. Such cases seem possible even if the value of an option invariably depends on the agent’s concerns. If options can be incommensurable, then incomplete preferences are to be expected. This paper assumes that alternatives can be incommensurable and explores the following prominent idea: Insofar as choice situations that agents face qua rational agents involve options that are neither one better than the other nor equally good, the choice situations are, due to this structural feature, distinctively hard. The reasoning in this paper suggests that this position is mistaken, not because there is no real challenge here, but because, insofar as a challenge has been identified, it is one that can make cases involving commensurable alternatives hard too. The paper’s reasoning also suggests that the challenge at issue can sometimes, even if not always, be overcome via effective choice over time.
Partial comparability, Ralf Bader
This paper considers cases of partial comparability, where various dimensions of evaluation are taken to be not fully but only partially comparable, so that some but not all gains and losses on these dimensions can be traded off against each other. It will be argued that these cases are to be given an epistemic interpretation on which there is, objectively, complete comparability that is only partially epistemically accessible to us.
The Continuum Argument is Invalid, John Broome
Derek Parfit argues by means of something he calls a ‘continuum argument’ that a particular appealing premise in population axiology implies a conclusion that he and many other people find repugnant. He treats this as a paradox, and takes up the challenge of resolving it, looking for a way to avoid this Repugnant Conclusion. The solution he offers depends on the existence of imprecision within the relation of betterness among populations of people. Other philosophers have taken up the same challenge, following Parfit’s lead, and offered solutions also based on imprecision or incommensurability.
I shall show that actually the Repugnant Conclusion is not implied by Parfit’s appealing premise. The continuum argument is invalid. There is therefore no paradox and no real challenge. Moreover, the explanation of why this is so has nothing to do with imprecision, incompleteness, incommensurability, indeterminacy or vagueness in betterness. It is consistent with a sharp, complete betterness ordering.
Value magnitudes and incomparability, Krister Bykvist
Recently, there has been a (very) small revival in taking value magnitudes seriously. Values have been accepted as abstract entities in their own right rather than just equivalence classes of equally good items. As has been shown by myself, Jake Nebel and Brian Hedden, this value magnitude realism has many virtues. For example, it can (a) easily explain cross-time, cross-world, and inter-theoretical comparisons of value, (b) define goodness, badness, and neutrality without falling into the pitfalls of standard definitions, (c) provide qualitative versions of measurement axioms that seem easier to satisfy, and (d) provide qualitative versions of the axioms of social choice that enable us to clarify the role of invariance conditions and to escape some central impossibility theorems.
However, since in general all magnitudes of the same kind are assumed to be comparable – e.g., one weight is either greater than, less than, or the same as another weight – value magnitude realism seems to be committed to full comparability of values of the same kind. This would rule out intuitive value judgments. We can no longer claim that a legal career is neither better than, worse than, nor equally as good as a music career, and Mozart cannot be said to be neither better than, worse than, nor equally as good as Michelangelo, assuming that all of these items have value.
In my talk, I am going to explore the prospects of denying value comparability while accepting value magnitude realism. I shall argue that the prospects look dim unless one identifies overall value with vectors or distributions of value dimensions (as is done by John Nolt, Justin D’Ambrosio and Brian Hedden). However, it turns out to be difficult to find a way of understanding the nature and structure of these dimensions without falling prey to objections. Even if full comparability cannot be avoided, some comfort can be found in the fact that value magnitude realism can still make sense of value ambivalence and reasonable value disagreements.
On Judging, Courtney M. Cox
The intellectually honest judge faces a very serious problem about which little has been said: What should she do when she knows all the relevant facts, laws, and theories of adjudication, but still remains uncertain about what she ought to do? Such occasions will arise, for whatever her preferred theory about how she ought to decide a given case—what I will call her preferred “jurisprudence”—she may harbor lingering doubts that a competing jurisprudence is correct instead. Sometimes, these competing jurisprudences provide conflicting guidance. When that happens, what should she do?
Drawing on emerging debates in moral theory (e.g., MacAskill, Bykvist, & Ord 2020), I have previously modeled the judge’s problem as uncertainty between highly fragile, but complete, jurisprudences (Cox 2023). In this talk, I will return to my earlier modeling choices. There are numerous differences between the moral case and the legal one that inform such choices. Perhaps the most important one is this: A merely moral agent does not generally create binding precedent for how he ought to act in the future, let alone establish a rule of action for others. By contrast, a judge does make rules for others—and not just by her decisions, but by the explanations she elects to provide for them. This and other differences have important implications.
Taking the Cake? Rational Choice in the Face of Unresolved Value Conflict, Ryan Doody
This paper addresses a puzzle about how to choose between options when you don’t endorse a single precise way of evaluating them. It considers two different decision rules which disagree in such contexts—one from Amartya Sen and the other from Isaac Levi. Sen’s rule is more permissive than Levi’s. It says, roughly, that it’s permissible to take an option just so long as there is nothing else on the menu you prefer to it. On the other hand, Levi’s rule says, roughly, that an option is permissible only if there’s some reasonable way of evaluating your options, compatible with your current values, according to which that option would be best.
The paper argues that Levi’s best argument against Sen’s permissive rule fails. It then goes on to explore whether this argument can be rescued, and argues that it can be—by making an unconventional (but interesting) proposal about the relationship between deliberation and rational choice: roughly, that rational choices should issue only from rationally permissible ways of making them. Rational choices are (at least, in principle) endorsable. But, as I will argue, Sen’s rule violates this constraint while Levi’s does not.
Symmetry, Invariance, and Imprecise Probability, Zachary Goodsell and Jacob Nebel
It is tempting to think that a process of choosing a point at random from the surface of a sphere can be probabilistically symmetric, in the sense that any two regions of the sphere which differ by a rotation are equally likely to include the chosen point. Isaacs, Hájek, and Hawthorne (2022) argue from such symmetry principles and the mathematical paradoxes of measure to the existence of imprecise chances and the rationality of imprecise credences. Williamson (2007) has also argued from a related symmetry principle to the failure of probabilistic regularity. We contend that these arguments fail, because they rely on auxiliary assumptions about probability which are inconsistent with symmetry to begin with. We argue, moreover, that symmetry should be rejected in light of this inconsistency, and because it has implausible decision-theoretic implications.
The weaker principle of probabilistic invariance says that the probabilistic comparison of any two regions is unchanged by rotations of the sphere. This principle supports a more compelling argument for imprecise probability. We show, however, that invariance is incompatible with mundane judgments about what is probable. Ultimately, we find reason to be suspicious of the application of principles like symmetry and invariance to nonmeasurable regions.
Two Forcing Money Pumps against Incompleteness, Johan Gustafsson
In this talk, I will present two separate money pumps against incompleteness. The first is deontic; it relies on assumptions about what it is rationally permissible to choose. The second is behavioural; it relies on assumptions about how the agent behaves in choices where their preferences over the options are incomplete. Both money pumps are forcing — that is, the agent is rationally required to go along with each step of the exploitation scheme.
Controlling What You Should Do in an Infinite World, Caspar Hare
I will argue that, when incommensurable goods are at stake, you can find yourself in a situation like this: You have just two options. You have no reason to take either option. You know of each option that if you learn more about what will happen if you take it then you will gain decisive reasons to take it. Furthermore, I will argue that, if a certain cosmological hypothesis is true, then you are routinely in situations like this.
Values as Vectors, Daniel Muñoz
Often, two things seem tied in value, yet slightly improving one would not break the tie. How can we model such “insensitivity to sweetening?” A leading answer is that overall values, rather than being like precise numbers, must be imprecise. I argue that imprecise values are both inadequate and unnecessary for modeling sweetening. The key is not imprecision but multidimensionality. I illustrate this point with a model of values as vectors of (precise) real numbers, then show how to pare back the model’s more controversial aspects. The result is a fresh and flexible framework for the stranger side of ethics (a souped-up “dimensionalism”)—along with some morals about transitivity and an elegant definition of parity.
Person-affecting Restriction and Incommensurable Lives, Wlodek Rabinowicz
Nebel (2018) argues that, in the presence of incommensurabilities between lives, welfarists should give up the Person-Affecting Restriction (PAR). They should accept that, even if the population is held fixed, an outcome might be better than another outcome and yet not be better for anyone. That PAR is problematic when applied to variable-population comparisons is well-known. But that it is problematic even if the population is fixed is a novel and striking observation. Nebel’s argument takes its departure from a problem posed by Hare (2010). I call it a problem of crosswise sweetening. Hare originally posed it as a quandary for rational choice for agents with incomplete preferences. Nebel finds another application for crosswise sweetening – in population axiology. Nebel’s argument against PAR is, I think, basically correct, but it is not fully compelling as it stands: It requires further support, which I will attempt to provide in my talk, relying on the fitting-attitude account of value relations and using an approach similar to the one I have developed in “Incommensurability meets Risk” (2021). That paper targets yet another axiological application of crosswise sweetening, one that is closer to Hare’s original problem.
Appeals to Imprecision in Justifying Social Decisions, Katie Steele
There has arguably been a reaction against having too much (or false) precision in the probabilities and utilities when modelling important social decisions (such as decisions about the social cost of carbon; see, for instance, Stern et al. 2022). But if we seek the strongest justification for any given social choice, then we should also be concerned about leaning too far the other way – that is, having too much (or false) imprecision in the relevant probabilities and/or utilities. I initially articulate this line of argument regarding the justification of social decisions. I subsequently examine whether a different, coherence-based line of argument against overstating imprecision in justifying social decisions can be made, by appeal to Harsanyi’s (1955) theorem (or to generalisations thereof). However, this theorem – if interpreted as requiring perfect conformity between social and individual probabilistic beliefs – may be regarded as an ‘impossibility theorem’. I investigate whether the adjusted theorems of Danan et al. (2016) can be interpreted as constraining the extent of imprecision in social decision making, without going so far as to require the impossible.