Blake Invitational 1 Kamiak nb aff




Aff Util (w/ Race Adv)


1AC - Round 2

Framework

The standard is maximizing pleasure and minimizing pain:

1. Revisionary intuitionism is true and proves util



Yudkowsky 8: Eliezer Yudkowsky (research fellow of the Machine Intelligence Research Institute; he also writes Harry Potter fan fiction). “The ‘Intuitions’ Behind ‘Utilitarianism.’” 28 January 2008. LessWrong. http://lesswrong.com/lw/n9/the_intuitions_behind_utilitarianism/

I haven't said much about metaethics - the nature of morality - because that has a forward dependency on a discussion of the Mind Projection Fallacy that I haven't gotten to yet. I used to be very confused about metaethics. After my confusion finally cleared up, I did a postmortem on my previous thoughts. I found that my object-level moral reasoning had been valuable and my meta-level moral reasoning had been worse than useless. And this appears to be a general syndrome - people do much better when discussing whether torture is good or bad than when they discuss the meaning of "good" and "bad". Thus, I deem it prudent to keep moral discussions on the object level wherever I possibly can. Occasionally people object to any discussion of morality on the grounds that morality doesn't exist, and in lieu of jumping over the forward dependency to explain that "exist" is not the right term to use here, I generally say, "But what do you do anyway?" and take the discussion back down to the object level. Paul Gowder, though, has pointed out that both the idea of choosing a googolplex dust specks in a googolplex eyes over 50 years of torture for one person, and the idea of "utilitarianism", depend on "intuition". He says I've argued that the two are not compatible, but charges me with failing to argue for the utilitarian intuitions that I appeal to. Now "intuition" is not how I would describe the computations that underlie human morality and distinguish us, as moralists, from an ideal philosopher of perfect emptiness and/or a rock. But I am okay with using the word "intuition" as a term of art, bearing in mind that "intuition" in this sense is not to be contrasted to reason, but is, rather, the cognitive building block out of which both long verbal arguments and fast perceptual arguments are constructed. I see the project of morality as a project of renormalizing intuition. We have intuitions about things that seem desirable or undesirable, intuitions about actions that are right or wrong, intuitions about how to resolve conflicting intuitions, intuitions about how to systematize specific intuitions into general principles. Delete all the intuitions, and you aren't left with an ideal philosopher of perfect emptiness, you're left with a rock. Keep all your specific intuitions and refuse to build upon the reflective ones, and you aren't left with an ideal philosopher of perfect spontaneity and genuineness, you're left with a grunting caveperson running in circles, due to cyclical preferences and similar inconsistencies. "Intuition", as a term of art, is not a curse word when it comes to morality - there is nothing else to argue from. Even modus ponens is an "intuition" in this sense - it's just that modus ponens still seems like a good idea after being formalized, reflected on, extrapolated out to see if it has sensible consequences, etcetera. So that is "intuition". However, Gowder did not say what he meant by "utilitarianism". Does utilitarianism say... That right actions are strictly determined by good consequences? That praiseworthy actions depend on justifiable expectations of good consequences? That probabilities of consequences should normatively be discounted by their probability, so that a 50% probability of something bad should weigh exactly half as much in our tradeoffs? That virtuous actions always correspond to maximizing expected utility under some utility function? That two harmful events are worse than one?
That two independent occurrences of a harm (not to the same person, not interacting with each other) are exactly twice as bad as one? That for any two harms A and B, with A much worse than B, there exists some tiny probability such that gambling on this probability of A is preferable to a certainty of B? If you say that I advocate something, or that my argument depends on something, and that it is wrong, do please specify what this thingy is... anyway, I accept 3, 5, 6, and 7, but not 4; I am not sure about the phrasing of 1; and 2 is true, I guess, but phrased in a rather solipsistic and selfish fashion: you should not worry about being praiseworthy. Now, what are the "intuitions" upon which my "utilitarianism" depends? This is a deepish sort of topic, but I'll take a quick stab at it. First of all, it's not just that someone presented me with a list of statements like those above, and I decided which ones sounded "intuitive". Among other things, if you try to violate "utilitarianism", you run into paradoxes, contradictions, circular preferences, and other things that aren't symptoms of moral wrongness so much as moral incoherence. After you think about moral problems for a while, and also find new truths about the world, and even discover disturbing facts about how you yourself work, you often end up with different moral opinions than when you started out. This does not quite define moral progress, but it is how we experience moral progress. As part of my experienced moral progress, I've drawn a conceptual separation between questions of type Where should we go? and questions of type How should we get there? (Could that be what Gowder means by saying I'm "utilitarian"?) The question of where a road goes - where it leads - you can answer by traveling the road and finding out. If you have a false belief about where the road leads, this falsity can be destroyed by the truth in a very direct and straightforward manner. When it comes to wanting to go to a particular place, this want is not entirely immune from the destructive powers of truth. You could go there and find that you regret it afterward (which does not define moral error, but is how we experience moral error). But, even so, wanting to be in a particular place seems worth distinguishing from wanting to take a particular road to a particular place. Our intuitions about where to go are arguable enough, but our intuitions about how to get there are frankly messed up. After the two hundred and eighty-seventh research study showing that people will chop their own feet off if you frame the problem the wrong way, you start to distrust first impressions. When you've read enough research on scope insensitivity - people will pay only 28% more to protect all 57 wilderness areas in Ontario than one area, people will pay the same amount to save 50,000 lives as 5,000 lives... that sort of thing... Well, the worst case of scope insensitivity I've ever heard of was described here by Slovic: Other recent research shows similar results. Two Israeli psychologists asked people to contribute to a costly life-saving treatment. They could offer that contribution to a group of eight sick children, or to an individual child selected from the group. The target amount needed to save the child (or children) was the same in both cases. Contributions to individual group members far outweighed the contributions to the entire group.
There's other research along similar lines, but I'm just presenting one example, 'cause, y'know, eight examples would probably have less impact. If you know the general experimental paradigm, then the reason for the above behavior is pretty obvious - focusing your attention on a single child creates more emotional arousal than trying to distribute attention around eight children simultaneously. So people are willing to pay more to help one child than to help eight. Now, you could look at this intuition, and think it was revealing some kind of incredibly deep moral truth which shows that one child's good fortune is somehow devalued by the other children's good fortune. But what about the billions of other children in the world? Why isn't it a bad idea to help this one child, when that causes the value of all the other children to go down? How can it be significantly better to have 1,329,342,410 happy children than 1,329,342,409, but then somewhat worse to have seven more at 1,329,342,417? Or you could look at that and say: "The intuition is wrong: the brain can't successfully multiply by eight and get a larger quantity than it started with. But it ought to, normatively speaking." And once you realize that the brain can't multiply by eight, then the other cases of scope neglect stop seeming to reveal some fundamental truth about 50,000 lives being worth just the same effort as 5,000 lives, or whatever. You don't get the impression you're looking at the revelation of a deep moral truth about nonagglomerative utilities. It's just that the brain doesn't goddamn multiply. Quantities get thrown out the window. If you have $100 to spend, and you spend $20 each on each of 5 efforts to save 5,000 lives, you will do worse than if you spend $100 on a single effort to save 50,000 lives. Likewise if such choices are made by 10 different people, rather than the same person. As soon as you start believing that it is better to save 50,000 lives than 25,000 lives, that simple preference of final destinations has implications for the choice of paths, when you consider five different events that save 5,000 lives. (It is a general principle that Bayesians see no difference between the long-run answer and the short-run answer; you never get two different answers from computing the same question two different ways. But the long run is a helpful intuition pump, so I am talking about it anyway.) The aggregative valuation strategy of "shut up and multiply" arises from the simple preference to have more of something - to save as many lives as possible - when you have to describe general principles for choosing more than once, acting more than once, planning at more than one time. Aggregation also arises from claiming that the local choice to save one life doesn't depend on how many lives already exist, far away on the other side of the planet, or far away on the other side of the universe. Three lives are one and one and one. No matter how many billions are doing better, or doing worse. 3 = 1 + 1 + 1, no matter what other quantities you add to both sides of the equation. And if you add another life you get 4 = 1 + 1 + 1 + 1. That's aggregation. 
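The dollar arithmetic in that passage is worth writing out. As an illustration only, assume lives saved scale linearly with funding and that $100 fully funds an effort (an assumption we add; Yudkowsky leaves it implicit):

```latex
% Split $100 across five efforts vs. concentrate it on one:
\[
\text{split: } 5 \times \frac{\$20}{\$100} \times 5{,}000 = 5{,}000 \text{ lives}
\qquad
\text{concentrated: } \frac{\$100}{\$100} \times 50{,}000 = 50{,}000 \text{ lives}
\]
```

On any reading of the funding assumption the concentrated option dominates, which is the "shut up and multiply" point.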
When you've read enough heuristics and biases research, and enough coherence and uniqueness proofs for Bayesian probabilities and expected utility, and you've seen the "Dutch book" and "money pump" effects that penalize trying to handle uncertain outcomes any other way, then you don't see the preference reversals in the Allais Paradox as revealing some incredibly deep moral truth about the intrinsic value of certainty. It just goes to show that the brain doesn't goddamn multiply. The primitive, perceptual intuitions that make a choice "feel good" don't handle probabilistic pathways through time very skillfully, especially when the probabilities have been expressed symbolically rather than experienced as a frequency. So you reflect, devise more trustworthy logics, and think it through in words. When you see people insisting that no amount of money whatsoever is worth a single human life, and then driving an extra mile to save $10; or when you see people insisting that no amount of money is worth a decrement of health, and then choosing the cheapest health insurance available; then you don't think that their protestations reveal some deep truth about incommensurable utilities. Part of it, clearly, is that primitive intuitions don't successfully diminish the emotional impact of symbols standing for small quantities - anything you talk about seems like "an amount worth considering". And part of it has to do with preferring unconditional social rules to conditional social rules. Conditional rules seem weaker, seem more subject to manipulation. If there's any loophole that lets the government legally commit torture, then the government will drive a truck through that loophole. So it seems like there should be an unconditional social injunction against preferring money to life, and no "but" following it. Not even "but a thousand dollars isn't worth a 0.0000000001 probability of saving a life". Though the latter choice, of course, is revealed every time we sneeze without calling a doctor. The rhetoric of sacredness gets bonus points for seeming to express an unlimited commitment, an unconditional refusal that signals trustworthiness and refusal to compromise. So you conclude that moral rhetoric espouses qualitative distinctions, because espousing a quantitative tradeoff would sound like you were plotting to defect. On such occasions, people vigorously want to throw quantities out the window, and they get upset if you try to bring quantities back in, because quantities sound like conditions that would weaken the rule. But you don't conclude that there are actually two tiers of utility with lexical ordering. You don't conclude that there is actually an infinitely sharp moral gradient, some atom that moves a Planck distance (in our continuous physical universe) and sends a utility from 0 to infinity. You don't conclude that utilities must be expressed using hyper-real numbers. Because the lower tier would simply vanish in any equation. It would never be worth the tiniest effort to recalculate for it. All decisions would be determined by the upper tier, and all thought spent thinking about the upper tier only, if the upper tier genuinely had lexical priority. 
As Peter Norvig once pointed out, if Asimov's robots had strict priority for the First Law of Robotics ("A robot shall not harm a human being, nor through inaction allow a human being to come to harm") then no robot's behavior would ever show any sign of the other two Laws; there would always be some tiny First Law factor that would be sufficient to determine the decision. Whatever value is worth thinking about at all, must be worth trading off against all other values worth thinking about, because thought itself is a limited resource that must be traded off. When you reveal a value, you reveal a utility. I don't say that morality should always be simple. I've already said that the meaning of music is more than happiness alone, more than just a pleasure center lighting up. I would rather see music composed by people than by nonsentient machine learning algorithms, so that someone should have the joy of composition; I care about the journey, as well as the destination. And I am ready to hear if you tell me that the value of music is deeper, and involves more complications, than I realize - that the valuation of this one event is more complex than I know. But that's for one event. When it comes to multiplying by quantities and probabilities, complication is to be avoided - at least if you care more about the destination than the journey. When you've reflected on enough intuitions, and corrected enough absurdities, you start to see a common denominator, a meta-principle at work, which one might phrase as "Shut up and multiply." Where music is concerned, I care about the journey. When lives are at stake, I shut up and multiply. It is more important that lives be saved, than that we conform to any particular ritual in saving them. And the optimal path to that destination is governed by laws that are simple, because they are math. And that's why I'm a utilitarian - at least when I am doing something that is overwhelmingly more important than my own feelings about it - which is most of the time, because there are not many utilitarians, and many things left undone.
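The lexical-priority point can be illustrated with a short sketch (ours, not Yudkowsky's or Norvig's): if options are scored as (upper-tier, lower-tier) pairs and compared lexicographically, the lower tier can only ever break an exact tie in the upper tier, so in practice it never decides anything. The numbers below are made up.

```python
# Options scored as (upper_tier, lower_tier) pairs; Python's tuple comparison
# is lexicographic, so the lower tier matters only when the upper tier ties exactly.
options = {
    "A": (0.7300001, 0.0),     # negligible upper-tier edge, nothing in the lower tier
    "B": (0.7300000, 9999.0),  # enormous lower-tier value, slightly weaker upper tier
}

best = max(options, key=options.get)
print(best)  # "A" -- the lower tier vanishes from the decision, as the card argues
```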

2. Only consequentialism can explain moral substitutability

Sinnott-Armstrong 92: Sinnott-Armstrong, Walter. “An Argument for Consequentialism.” Dartmouth College. Philosophical Perspectives, 6, Ethics, 1992. RW

Since general substitutability works for other kinds of reasons for action, we would need a strong argument to deny that it holds also for moral reasons. If moral reasons obeyed different principles, it would be hard to understand why moral reasons are also called 'reasons' and how moral reasons interact with other reasons when they apply to the same action. Nonetheless, this extension has been denied, so we have to look at moral reasons carefully. I have a moral reason to feed my child tonight, both because I promised my wife to do so, and also because of my special relation to my child along with the fact that she will go hungry if I don't feed her. I can't feed my child tonight without going home soon, and going home soon will enable me to feed her tonight. Therefore, there is a moral reason for me to go home soon. It need not be imprudent or ugly or sacrilegious or illegal for me not to feed her, but the requirements of morality give me a moral reason to feed her. This argument assumes a special case of substitutability: (MS) If there is a moral reason for A to do X, and if A cannot do X without doing Y, and if doing Y will enable A to do X, then there is a moral reason for A to do Y. I will call this 'the principle of moral substitutability', or just 'moral substitutability'.
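Stated schematically (our notation, not Sinnott-Armstrong's), the principle is a closure condition on moral reasons:

```latex
% R_m(A, X): there is a moral reason for A to do X
% N(X, Y):   A cannot do X without doing Y
% E(Y, X):   A's doing Y will enable A to do X
\[
\text{(MS)}\quad \big( R_m(A,X) \,\land\, N(X,Y) \,\land\, E(Y,X) \big) \;\rightarrow\; R_m(A,Y)
\]
```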

He continues:

Of course, there are many other versions of deontology. I cannot discuss them all. Nonetheless, these examples suggest that it is the very nature of deontological reasons that makes deontological theories unable to explain moral substitutability. This comes out clearly if we start from the other side and ask which properties create the moral reasons that are derived by moral substitutability. What gives me a moral reason to start the mower is the consequences of starting the mower. Specifically, it has the consequence that I am able to mow the grass. This reason cannot derive from the same property as my moral reason to mow the lawn unless what gives me a moral reason to mow the lawn is its consequences. Thus, any non-consequentialist moral theory will have to posit two distinct kinds of moral reasons: one for starting the mower and another for mowing the grass. Once these kinds of reasons are separated, we need to understand the connection between them. But this connection cannot be explained by the substantive principles of the theory. That is why all deontological theories must lack the explanatory coherence which is a general test of adequacy for all theories. I conclude that no deontological theory can adequately explain moral substitutability.



He continues:

All other moral reasons are non-consequential. Thus, a moral reason to do an act is non-consequential if and only if the reason depends even partly on some property that the act has independently of its consequences. For example, an act can be a lie regardless of what happens as a result of the lie (since some lies are not believed), and some moral theories claim that that property of being a lie provides a moral reason not to tell a lie regardless of the consequences of this lie. Similarly, the fact that an act fulfills a promise is often seen as a moral reason to do the act, even though the act has that property of fulfilling a promise independently of its consequences. All such moral reasons are non-consequential. In order to avoid so many negations, I will also call them 'deontological'. This distinction would not make sense if we did not restrict the notion of consequences. If I promise to mow the lawn, then one consequence of my mowing might seem to be that my promise is fulfilled. One way to avoid this problem is to specify that the consequences of an act must be distinct from the act itself. My act of fulfilling my promise and my act of mowing are not distinct, because they are done by the same bodily movements.[10] Thus, my fulfilling my promise is not a consequence of my mowing. A consequence of an act need not be later in time than the act, since causation can be simultaneous, but the consequence must at least be different from the act. Even with this clarification, it is still hard to classify some moral reasons as consequential or deontological,[11] but I will stick to examples that are clear. In accordance with this distinction between kinds of moral reasons, I can now distinguish different kinds of moral theories. I will say that a moral theory is consequentialist if and only if it implies that all basic moral reasons are consequential. A moral theory is then non-consequentialist or deontological if it includes any basic moral reasons which are not consequential. 5. Against Deontology So defined, the class of deontological moral theories is very large and diverse. This makes it hard to say anything in general about it. Nonetheless, I will argue that no deontological moral theory can explain why moral substitutability holds. My argument applies to all deontological theories because it depends only on what is common to them all, namely, the claim that some basic moral reasons are not consequential. Some deontological theories allow very many weighty moral reasons that are consequential, and these theories might be able to explain why moral substitutability holds for some of their moral reasons: the consequential ones. But even these theories cannot explain why moral substitutability holds for all moral reasons, including the non-consequential reasons that make the theory deontological. The failure of deontological moral theories to explain moral substitutability in the very cases that make them deontological is a reason to reject all deontological moral theories. I cannot discuss every deontological moral theory, so I will discuss only a few paradigm examples and show why they cannot explain moral substitutability. After this, I will argue that similar problems are bound to arise for all other deontological theories by their very nature. The simplest deontological theory is the pluralistic intuitionism of Prichard and Ross.
Ross writes that, when someone promises to do something, 'This we consider obligatory in its own nature, just because it is a fulfillment of a promise, and not because of its consequences.'[12] Such deontologists claim in effect that, if I promise to mow the grass, there is a moral reason for me to mow the grass, and this moral reason is constituted by the fact that mowing the grass fulfills my promise. This reason exists regardless of the consequences of mowing the grass, even though it might be overridden by certain bad consequences. However, if this is why I have a moral reason to mow the grass, then, even if I cannot mow the grass without starting my mower, and starting the mower would enable me to mow the grass, it still would not follow that I have any moral reason to start my mower, since I did not promise to start my mower, and starting my mower does not fulfill my promise. Thus, a moral theory cannot explain moral substitutability if it claims that properties like this provide moral reasons.

3. Use moral hedging: under moral uncertainty, we ought to maximize expected moral rightness across theories rather than acting on whichever single theory we find most plausible



Boey 13: Grace Boey “Is applied ethics applicable enough? Acting and hedging under moral uncertainty” 3 Quarks Daily December 16th 2013 http://www.3quarksdaily.com/3quarksdaily/2013/12/is-applied-ethics-applicable-enough-acting-under-moral-uncertainty.html JW

A runaway train trolley is racing towards five men who are tied to the track. By pulling a lever, you could divert the train's path to an alternative track, which has only one man on it … If you're gearing up to respond with what you'd do and why, don't bother. It doesn't matter whether you'd pull the lever: it's too late. The five were run over almost fifty years ago, because philosophers couldn't decide what to do. They have been – pun most certainly intended – debated to death. Formulated by the late Philippa Foot in 1967, the famous "trolley problem" has since been endlessly picked apart and argued over by moral philosophers. It's even been reformulated – apart from "classic", the trolley problem also comes in "fat man", "loop", "transplant" and "hammock" varieties. Yet, in spite of all the fascinating analysis, there still isn't any good consensus on what the right thing is to do. And, not only do philosophers disagree over what to do, a significant number of them just aren't sure. In a 2009 survey of mostly professional philosophers, 34.8% of the respondents indicated some degree of uncertainty over the right answer.[1] Philosopher or not, if you're in the habit of being intellectually honest, then there's a good chance you aren't completely certain about all your moral beliefs. Looking to the ethics textbooks doesn't help – you'd be lucky not to come away from that with more doubts than before. If the philosophical field of ethics is supposed to resolve our moral dilemmas, then on some level it has obviously failed. Debates over moral issues like abortion, animal rights and euthanasia rage on, between opposing parties and also within the minds of individuals. These uncertainties won't go away any time soon. Once we recognize this, then the following question naturally arises: what's the best way to act under moral uncertainty? Ethicists, strangely, have mostly overlooked this question. But in relatively recent years, a small group of philosophers have begun rigorous attempts at addressing the problem. In particular, attempts are being made to adapt probability and expected utility theory to decision-making under moral uncertainty. The nature of moral uncertainty Before diving into such theories, it's useful to distinguish moral uncertainty from non-moral uncertainty. Non-philosophers often get impatient with philosophical thought experiments, because they depict hypothetical situations that don't do justice to the complexities of everyday life. One real-life feature that thought experiments often omit is uncertainty over facts about the situation. The trolley problem, for example, stipulates that five people on the original track will definitely die if you don't pull the lever, and that one person on the alternative track will definitely die if you do. It also stipulates that all the six lives at stake are equally valuable. But how often are we so certain about the outcomes of our actions? Perhaps there's a chance that the five people might escape, or that the one person might do the same. And are all six individuals equally virtuous? Are any of them terminally ill? Naturally, such possibilities would impact our attitude towards pulling the lever or not. Such concerns are often brought up by first-time respondents to the problem, and must be clarified before the question gets answered proper. Lots has been written about moral decision-making under factual uncertainty. Michael Zimmerman, for example, has written an excellent book on how such ignorance impacts morality.
The point of most ethical thought experiments, though, is to eliminate precisely this sort of uncertainty. Ethicists are interested in finding out things like whether, once we know all the facts of the situation, and all other things being equal, it's okay to engage in certain actions. If we're still not sure of the rightness or wrongness of such actions, or of underlying moral theories themselves, then we experience moral uncertainty. As the 2009 survey indicates, many professional philosophers still face such fundamental indecision. The trolley problem – especially the fat man variant – is used to test our fundamental moral commitment to deontology or consequentialism. I'm pretty sure I'd never push a fat bystander off a bridge onto a train track in order to save five people, but what if a million people and my mother were at stake? Should I torture an innocent person for one hour if I knew it would save the population of China? Even though I'd like to think of myself as pretty committed to human rights, the truth is that I simply don't know. Moral hedging and its problems So, what's the best thing to do when we're faced with moral uncertainty? Unless one thinks that anything goes once uncertainty enters the picture, then doing nothing by default is not a good strategy. As the trolley case demonstrates, inaction often has major consequences. Failure to act also comes with moral ramifications: Peter Singer famously argued that inaction is clearly immoral in many circumstances, such as refusing to save a child drowning in a shallow pond. It's also not plausible to deliberate until we are completely morally certain – by the time we're done deliberating, it's often too late. Suppose I'm faced with the choice of saving one baby on a quickly-sinking raft, and saving an elderly couple on a quickly-sinking canoe. If I take too long to convince myself of the right decision, all three will drown. In relatively recent years, some philosophers have proposed a ‘moral hedging' strategy that borrows from expected utility theory. Ted Lockhart, professor of philosophy at Michigan Technological University, arguably kicked off the conversation in 2000 with his book Moral Uncertainty and its Consequences. Lockhart considers the following scenario: Gary must choose between two alternatives, x and y. If Gary were certain that T1 is the true moral theory, then he would be 70% sure that x would be morally right in that situation and y would be morally wrong and 30% that the opposite is true. If Gary were sure that T2 is the true theory, then he would be 100% certain that y would be morally right for him to do and that x would be morally wrong. However, Gary believes there is a .6 probability that T1 is the true theory and a .4 probability that T2 is the true theory. (p.42) There are at least two ways that Gary could make his decision. First, Gary might pick the theory he has the most credence in. Following such an approach, Gary should stick to T1, and choose to do x. But Lockhart thinks that this 'my-favourite-theory' approach is mistaken. Instead, Lockhart argues that it is more rational to maximize the probability of being morally right. Following this, the probability that x would be morally right is .42 and the probability that y would be morally right is .58. Under this approach, Gary should choose y. This seems reasonable so far, but it isn't the end of the story.
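Lockhart's .42 and .58 follow directly from the credences stated in the passage:

```latex
\[
P(x \text{ is right}) = \underbrace{0.6 \times 0.7}_{T_1} + \underbrace{0.4 \times 0.0}_{T_2} = 0.42
\qquad
P(y \text{ is right}) = \underbrace{0.6 \times 0.3}_{T_1} + \underbrace{0.4 \times 1.0}_{T_2} = 0.58
\]
```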
Consider the following scenario described by Andrew Sepielli (professor of philosophy at University of Toronto, who has written extensively about moral uncertainty and hedging over the past few years): Suppose my credence is .51 that, once we tote up all the moral, prudential, and other reasons, it is better to kill animals for food than not, and .49 that it is better not to kill animals for food. But suppose I believe that, if killing animals is better, it is only slightly better; I also believe that, if killing animals is worse, it is substantially worse – tantamount to murder, even. Then it seems … that I have most subjective reason not to kill animals for food. The small gains to be realized if the first hypothesis is right do not compensate for the significant chance that, if you kill animals for food, you are doing something normatively equivalent to murder. Both Lockhart and Sepielli agree that it isn't enough for us to maximize the probability of being morally right. The value of outcomes under each theory should be factored into our decision-making process as well. We should aim to maximize some kind of ‘expected value', where the expected value of an outcome is the probability of its being right, multiplied by the value of its being right if it is indeed right. Lockhart's specific strategy is to maximize the ‘expected degree of moral rightness', but the broad umbrella of strategies that follows the approach can be called ‘moral hedging'. Moral hedging seems like a promising strategy, but it's plagued by some substantial problems. The biggest issue is what Sepielli calls the ‘Problem of Intertheoretic Comparisons' (PIC). How are we supposed to compare values across moral theories that disagree with each other? What perspective should we adopt while viewing these ‘moral scales' side by side? The idea of intertheoretic comparison is at least intuitively intelligible, but on closer inspection, values from different moral theories seem fundamentally incommensurable. Given that different theories with different values are involved, how could it be otherwise? Lockhart proposes what he calls the ‘Principle of Equity among Moral Theories' (PEMT), which states that maximum and minimum degrees of rightness should be fixed at the same level across theories, at least for decision-making purposes. But Sepielli points out that PEMT seems, amongst other things, arbitrary and ad hoc. Instead, he proposes that we use existing beliefs about 'cardinal ranking' of values to make the comparison. However, this method is open to its own objections, and also depends heavily on facts about practical psychology, which are themselves messy and have yet to be worked out. Whatever the case, there isn't any consensus on how to solve the problem of intertheoretic comparisons. PIC has serious consequences – if the problem turns out to be insurmountable, moral hedging will be impossible. This lack of consensus relates to another problem for moral hedging, and indeed for moral uncertainty in general. In addition to being uncertain about morality, we can also be uncertain about the best way to resolve moral uncertainty. Following that, we can be uncertain about the best way to resolve being uncertain about the best way to resolve moral uncertainty … and so on. How should we resolve this seemingly infinite regress of moral uncertainty? One last and related question is whether, practically speaking, calculated moral hedging is a plausible strategy for the average person. 
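A minimal sketch of the expected-value form of moral hedging that Lockhart and Sepielli converge on. The function and the numeric magnitudes for the animal case are illustrative assumptions on our part (the passage says only "slightly better" versus "tantamount to murder"), and the sketch simply assumes the intertheoretic-comparison problem away by putting both theories on one numeric scale:

```python
# Moral hedging as expected value: weight each theory's valuation of an action
# by your credence in that theory, then choose the action with the highest total.

def expected_moral_value(action, credences, values):
    """credences: {theory: probability}; values: {theory: {action: moral value}}."""
    return sum(p * values[theory][action] for theory, p in credences.items())

# Sepielli-style animal case with made-up magnitudes:
credences = {"killing_ok": 0.51, "killing_wrong": 0.49}
values = {
    "killing_ok":    {"eat_meat": 1.0,    "abstain": 0.0},  # killing only "slightly better"
    "killing_wrong": {"eat_meat": -100.0, "abstain": 0.0},  # killing "tantamount to murder"
}

for act in ("eat_meat", "abstain"):
    print(act, expected_moral_value(act, credences, values))
# abstain (0.0) beats eat_meat (-48.49) even though "killing_ok" is the more probable theory.
```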
Human beings, or at least most of them, aren't able to pinpoint the precise degree to which they believe that things are true. Perhaps they are uncertain about their own probabilistic beliefs (or perhaps it doesn't even make sense to say something like "I have a .51 credence that killing animals for food is wrong"). Additionally, surely the average person can't be expected to perform complex mathematical calculations every time she's faced with uncertainty. If moral hedging is too onerous, it loses its edge over simply deliberating over the moral theories themselves. Moral hedging must accommodate human limits if it is to be applicable. Moving ahead with moral uncertainty It's clear that there's much more work to be done on theories of moral hedging, but the idea seems promising. Hopefully, a good method for inter-theoretic value comparison will be developed in time. It's also good that these philosophers have brought the worthwhile question of moral uncertainty to life, and perhaps strong alternatives to moral hedging will subsequently emerge. Maybe philosophers will never agree, and textbooks on moral uncertainty will join the inconclusive ranks of textbooks on normative ethics. But we can hardly call such textbooks useless: they show us the basis (or non-basis) of our initial intuitions, and at least give us the resources to make more considered judgments about our moral decisions. Over the years, humanity has benefitted enormously from the rigorous thinking done by ethicists. In the same way, we stand to benefit from thinking about what to do under moral uncertainty. It's also clear that theories have to accommodate, and be realistic about, human psychology and our mental capacities to apply such strategies. Any theory that doesn't do this would be missing the point. If these theories aren't practically applicable, then we may as well not have formulated them. The practical circumstances under which we encounter moral uncertainty also differ – in some situations, we're under more psychological stress and time constraints than others. This raises the interesting question of whether optimal strategies might differ for agents under different conditions. Something else to consider is this: in what way, if any, are we required to cope with moral uncertainty in ways such as moral hedging? Theorists like Lockhart and Sepielli argue for the most ‘rational' thing to do under uncertain circumstances, but what if one doesn't care for being rational? Might we still be somehow morally obliged to adopt some strategy? Perhaps there is some kind of moral duty to act rationally, or 'responsibly', under moral uncertainty. It does seem, in a somewhat tautological sense, that we're morally required to be moral. Perhaps this involves being morally required to ‘try our best' to be moral, and not to engage in overly 'morally risky' behaviour. Other than Lockhart and Sepielli, others who have engaged the issue of moral uncertainty – at least tangentially – include James Hudson (who rejected moral hedging), Graham Oddie, Jacob Ross, Alexander Guerrero, Toby Ord and Nick Bostrom. Hopefully, in time, more philosophers (and non-philosophers too) will join in this fledgling debate. The discussion and its findings will be a tribute to all the victims of ethical thought experiments everywhere – may we someday stop killing hypothetical people with analysis paralysis.

ROB

