Utilitarianism: "A moral theory according to which an action is right if and only if it conforms to the principle of utility."
Utility: "that property in any object, whereby it tends to produce benefit, advantage, pleasure, good, or happiness...or...to prevent the happening of mischief, pain, evil, or unhappiness"
Is that enough of a definition for the goals of utilitarianism?
Utilitarianism kicks ass.
Gelare wrote: But that's a terrible answer. You're telling me that what makes Utilitarianism right isn't that it's a solid moral philosophy of any kind, what makes it right is that we all agree on it, which, indeed, is a perfectly respectable position to hold if you're a Libertarian, and respecting human autonomy and choice is of central importance to you, but not if you're a Utilitarian. And since, contrary to what you say, not everyone (indeed, almost no one) has decided that maximizing dopamine levels in the brain is the moral thing to do, your morality by consensus theory actually means you're not just wrong but also bad for not going with the majority on this one. Try again.

The happiness/dopamine portion of my post was just an example of what people might choose. I have to admit that I don't really see what your gripe is about regarding the how and why. It seems like you're arguing that there needs to be some objective morality that Utilitarianism needs to be able to access or align itself with in order to define something as good or not-good. Are you arguing from a position of the present? It seems like you are, with your "almost no one" comment regarding the dopamine example. If so, why? Are you trying to map a path for how the U.S. would become a Utilitarian society?
If you're going to reject any definition of good, or happiness, or suffering--as well as any metric for determining such--then the only thing you're really left with is consensus, and that seems good enough. (At least until evidence and testing shows that it isn't.)
Are you concerned that would lead to a situation where 90% of the population decides that it's better off without the other 10% and, thus, their elimination is a moral action? If that's the case, would Negative Utilitarianism, with its focus on preventing harm and suffering, sit better with you?
Are you concerned that Utilitarianism is a fluid morality, where actions change status over time? That what is taboo today might be shown to be acceptable tomorrow?
FrankTrollman wrote: The biggest weakness of the theory is the definition of "Good" in the first place - which is necessarily up for debate. You can make a perfectly good strawman around "Grapefruit Utilitarianism" where you claim that the highest good outcome possible is the creation and storage of as many grapefruits as possible, regardless of all other factors. That's a tough line of argument for the Utilitarian to deal with, because Ethical Calculus is not settled, and there is no obvious way to settle it. Thus, Grapefruit Utilitarianism could be "right." Absence of Evidence is not Evidence of Absence and all that.
-Username17

VitM, I'm basically pressing utilitarians on this point, which Frank rightly said was one of the harder things for utilitarianism to deal with. If you don't tell me what the good that is supposed to be maximized actually is, you're basically giving me a car with no engine - pretty, and completely, entirely useless. Utilitarianism is supposed to be a moral theory, and if what you're going to do is leave out the discussion of what's good and what's bad and why - if you're leaving out the parts that make it moral - you're left with basically nothing.
In general, Utilitarian views value the hierarchy of needs, and how much they value things on different tiers of the pyramid varies from person to person:

[image: Maslow's hierarchy-of-needs pyramid]
Naturally, courses of action will frequently trade some number of people filling their psychological needs for some other number of people failing to complete their safety needs and then Utilitarians argue about whether such a result is good or bad. "What is good" is not especially at odds, but how to score good and bad results is totally up for grabs.
-Username17
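
To make the scoring dispute concrete, here is a minimal sketch; the tier names, weights, and outcome below are contrived for illustration, not drawn from the thread. Two scorers who agree on what the goods are can still disagree about whether the same trade-off comes out good or bad:

```python
# Toy "ethical calculus": everyone agrees on which tiers of the pyramid
# matter, but how heavily to weight each tier is up for grabs.
# Tier names, weights, and the outcome are invented for illustration.

def score(outcome, weights):
    """One possible scoring rule among many: a weighted sum of tier changes."""
    return sum(weights[tier] * delta for tier, delta in outcome.items())

# Some policy: 10 people fill their psychological needs,
# 3 people fail to complete their safety needs.
outcome = {"psychological": +10, "safety": -3}

safety_heavy = {"psychological": 1, "safety": 5}  # weights safety losses heavily
flat_weights = {"psychological": 1, "safety": 1}  # weights every tier the same

print(score(outcome, safety_heavy))  # -5: the trade scores as bad
print(score(outcome, flat_weights))  # +7: the same trade scores as good
```

Same facts, same tiers, opposite verdicts: the disagreement lives entirely in the weights.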
FrankTrollman wrote: In general, Utilitarian views value the hierarchy of needs, and how much they value things on different tiers of the pyramid varies from person to person. Naturally, courses of action will frequently trade some number of people filling their psychological needs for some other number of people failing to complete their safety needs and then Utilitarians argue about whether such a result is good or bad. "What is good" is not especially at odds, but how to score good and bad results is totally up for grabs.
-Username17

Bingo. This is why almost all consequentialists are utilitarians.
Consequentialism requires an account of the good. Utilitarianism says that the good is 'happiness', and happiness can be empirically investigated. There's an awful lot of wrangling in philosophical circles about what 'really constitutes' happiness or wellbeing, and in the meantime the psychologists have just gone ahead and found out.
The soul is the prison of the body.
- Michel Foucault, Discipline & Punish
I think it's interesting that Frank praises modern utilitarianism almost exclusively for the trait whose absence from utilitarianism most pissed me off when I was reading John Stuart Mill's book of the same name in ethics class. The basic premise of consequentialism--at least as we covered it--was that actions are good or bad based on whether their actual consequences are good or bad, not based on whether their intended consequences are good or bad, so the simplest and most direct attempt to apply the philosophy is both impossible and repulsive. Fortunately, consequentialists became a lot more sophisticated and practical and, while continuing to use maximizing actual good as the theoretical guiding goal, have decided that actual action and moral praiseworthiness should be judged based on our current best guess instead of the actual results through the end of time.
I'd certainly agree that attempting to achieve good results is morally praiseworthy, though I'm not sure I'd agree that's the entire picture.
Here's a highly esoteric objection to utilitarianism that I don't think is widely known (and I offer it purely for entertainment value):
Consequentialism says that actions can (in principle) be morally ranked in order of the sum total good in the universe (til the end of time) that will result if they are taken. However, it is conceivable (at least until we discover an awful lot of physics) that the universe is infinite in at least one dimension (time, for example), and therefore consequentialism directs us to take infinite sums. Mathematically, infinite sums are not guaranteed to be comparable; it might not be possible to say whether the sum of good resulting from one action is greater, lesser, or equal to the sum of good resulting from another action (even with perfect knowledge). Therefore, depending on the physical nature of the universe in which we find ourselves, the moral ranking by consequentialism might not even be coherent, let alone correct.
Therefore, any argument that consequentialist morality is "accurate" must necessarily logically imply that the universe is finite, or that the nature of good and the nature of the universe are somehow aligned such that all of the infinite sums will be totally ordered.
Is anyone prepared to claim with a straight face that they have an argument in favor of utilitarianism that also reveals profound truths about the physical structure of the universe?
Or if you see a flaw in my reasoning, that'd be interesting, too. I haven't taken the utmost care in constructing this argument.
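
To make the incomparability concrete, here is a minimal sketch; the utility streams are contrived purely for illustration and aren't from the thread. Two actions each produce some good every period, both totals diverge to infinity, and yet neither total ever dominates the other:

```python
# Two actions, A and B. Interpret a yielded +1 as "A produced one unit of
# good this period (B produced none)" and -1 as the reverse, so each action
# produces a nonnegative amount of good every period and both totals diverge.
# But the running lead of A over B flips sign forever with growing amplitude,
# so "which action produces more total good?" has no answer, even with
# perfect knowledge. The streams are contrived to illustrate exactly this.

def lead_of_a_over_b():
    """Yield (good from A) - (good from B) in alternating-sign blocks of doubling length."""
    sign, length = 1, 1
    while True:
        for _ in range(length):
            yield sign
        sign, length = -sign, length * 2

gen = lead_of_a_over_b()
lead, block_end_leads = 0, []
for n in range(1, 2**13):
    lead += next(gen)
    if (n + 1) & n == 0:        # n = 2^k - 1: the end of a block
        block_end_leads.append(lead)

print(block_end_leads)          # [1, -1, 3, -5, 11, -21, 43, ...]: no stable winner
```

Since the lead oscillates with unbounded amplitude, no comparison of the partial sums ever stabilizes; that is the sense in which the two infinite sums of good are incomparable.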
Our current best guess says that entropy will eventually increase to the point that the universe will be incompatible with human existence. There is actually a finite end to time.
Also, the world is and has always been in enough flux that forecasting more than a few centuries out is hard enough that you might as well not do it, at least if your primary effects will be social. Jefferson, for example, couldn't have known or predicted anything about the 20th century, so praising him for winning World War 2 would be silly. Physical or chemical effects can be a lot more persistent, though; we can, right now, predict that pumping CO2 into the air is going to do something to our climate in the long run, and if it happens too fast it'll be bad for us.
"No, you can't burn the inn down. It's made of solid fire."
Gelare wrote: You're telling me that what makes Utilitarianism right isn't that it's a solid moral philosophy of any kind, what makes it right is that we all agree on it,

You're putting the cart before the horse there. Utilitarians agree that a moral system (including their own) should be based on some form of democracy--which definitionally requires at least a majority of people to agree on it--because, get this, the more people agree on a course, the happier they all will be.
Yes, you're going to have a few holdouts out there who like things like dictatorships and aristocracies more and are going to be unhappier in a democracy. But it's a proven fact that democracies and democratic agreement on rules increase overall happiness.
Josh Kablack wrote:Your freedom to make rulings up on the fly is in direct conflict with my freedom to interact with an internally consistent narrative. Your freedom to run/play a game without needing to understand a complex rule system is in direct conflict with my freedom to play a character whose abilities and flaws function as I intended within that ruleset. Your freedom to add and change rules in the middle of the game is in direct conflict with my ability to understand that rules system before I decided whether or not to join your game.
In short, your entire post is dismissive of not merely my intelligence, but my agency. And I don't mean agency as a player within one of your games, I mean my agency as a person. You do not want me to be informed when I make the fundamental decisions of deciding whether to join your game or buying your rules system.