I find it a bit paradoxical that in ethics, the results of normative theories are most often judged against our common intuitions. The basic idea is that a normative theory (like utilitarianism) can be evaluated by how consistent it is with those intuitions. If it contradicts an obvious intuition (like "killing is morally wrong"), then the theory is obviously wrong. If it clearly coincides with our intuitions, then it might help us decide between conflicting intuitions.
The problem I’m thinking of is a bit abstract, but hopefully it’ll come across clearly. When normative ethical theories determine what makes an action moral, they almost always use some objective concept. Utilitarianism has an objective consequence: aggregate happiness. Deontology is almost a complete rejection of intuition; it claims that morality is entirely a product of reason. The common theme matches the intuition that morality shouldn’t be subjectively decided: I cannot claim stealing is okay just because I want it to be. There has to be some objective concept out there that my subjective opinion does not change.
However, morality in some way has to be based in our intuitions. The only way morality could be purely objective is if there were a higher power (like God) that clearly established what morality is. As most of you know, I don’t buy that possibility. Alternatively, I see morality as a social contract of sorts that began with initial groups of humans agreeing not to kill each other for mutual benefit.
Here’s the issue: in order for any sort of objective morality to be established, we have to already have an intuition that its base principle is correct. Utilitarianism is an appeal to objectivity, but it only stands if you have the intuition that happiness is the primary factor in morality. Ethical theories that appeal to reason will always have some intuition that they are based on. After all, reason cannot create objective facts; it can only derive objective facts from other objective facts. Reason is a sort of logical bridge that can connect ideas, not create them.
In morality, reason cannot create objective facts without deriving them from other objective facts. However, morality does not begin with objective fact, since it is a human-made concept. In order to arrive at objective rules about morality, ethical systems must have some other base facts. These base facts, I believe, are necessarily intuitive ideas.
So, which ideas do we in fact base morality on? My own theory requires the idea that humans value other humans as ends in themselves. So basically, my ethical theory only works for people who already have the intuition that other humans have a value that isn’t dependent on our own benefit. I believe my theory can then use reason to connect this intuition to my final theory.
The main question of this post: can that intuitive principle be objective? If morality cannot be subjectively determined, and morality is determined from that intuition, then that intuition cannot be subjective.
I have two possible answers to this: either we evolved to have that intuition (so it’s not much of a conscious choice), or the absence of that intuition rules out morality altogether.
If we evolved to have the intuition that others are valuable for their own sake, then perhaps morality can still be objective. The intuition it relies on isn’t subjectively chosen; rather, it is part of what it means to be human. Since we evolved with it, someone without it might be “less human”, perhaps to the same extent as someone who didn’t evolve the ability to understand language. If being human requires this intuition, then the intuition could be an objective standard for a morality that only concerns human decisions.
Another approach, which I find particularly appealing, is that a person cannot be moral without the intuition. So, if a person is an egoist (acts only in self-interest), then it is impossible for that person to act morally. I talked about this in my “Inferring an Ought from an Is” post: http://fensel.net/2011/09/17/inferring-an-ought-from-an-is/. I think it is reasonable to say: if you don’t value others as ends in themselves, then you cannot truly act morally.
-This post was delayed a bit because I had two papers due yesterday that I worked on all week. I’m still updating weekly though.
(Semi-spoiler alert, though if you haven’t seen the Dark Knight you really should go see it now)
In the Dark Knight, the Joker runs a social experiment of sorts. He holds two ships hostage, one full of regular civilians and the other full of prison inmates. He tells them over an intercom that both ships are rigged with explosives and will explode at midnight. The twist is that each ship has a trigger to the other ship’s bombs. If either ship pulls its trigger, the other ship will explode (killing everyone on board), but the ship that pulled it will be saved.
In the movie, both ships heroically decide that they won’t press the button, and wait it out until midnight. Batman, while fighting the Joker, notes that neither ship blew the other up and calls it a triumph of the human spirit. Batman then stops the Joker from detonating both ships, and everyone is saved.
The subtle lesson/moral idea is that it is not okay to sacrifice some to save many/the greater good. The movie praises the citizens/inmates for refusing to press the button, then “rewards” them by having no one die.
The problem: the ethical lesson isn’t fairly taught, because the actual outcome departs from the given circumstances. Under the original setup, here’s what the options were for either ship:
1. Do nothing: either both ships explode at midnight, or the other ship presses its trigger first and only your ship is destroyed.
2. Press the trigger: the other ship explodes, but everyone on your ship survives.
The movie clearly supported option #1, siding with a sort of deontological theory of ethics that prohibits using others’ deaths in any situation, regardless of the consequences. Consequentialists, like myself, would choose option #2. What the movie unfairly does, though, is not follow through with the given circumstances: neither ship is blown up even though the time ran out.
Imagine that, instead of the movie’s ending, the Joker had managed to fulfill his threat and both ships exploded, killing everybody. To actually promote option #1 (some sort of deontological ethics), the decision to do nothing should still be the morally right one, even though everyone died.
Is that correct? I obviously disagree; it is far worse for both ships of people to die than it is for one to kill the other.
The most common objection: In the real world, people don’t have guarantees in any situation. Maybe both ships should have waited on the chance that both would be saved?
Rebuttal: In hypothetical cases, probabilities can be removed by building guarantees into the case (this is done so the ethical questions can be focused on instead of guessing at the results). So, the question is: if both ships’ deaths are guaranteed to happen without a remote possibility of being saved, should one ship press the trigger?
In real cases: there really are no guarantees, and very few situations will be this straightforward, since most actual situations are caused by accidents rather than psychopathic genius clowns. In the real world, probabilities have to be weighed on a case-by-case basis. What doesn’t change, though, unless you believe both ships dying is better than one, is the ethical idea that consequences can justify otherwise immoral actions.
-There’s a famous thought experiment that roughly deals with this, though I can’t for the life of me remember the name/find the author. Basically, you are a traveler in a foreign country when you get taken hostage by a small army. The army is also holding 20 innocent villagers hostage. The leader, being both sadistic and playful, tells you that you have to shoot and kill one of the innocent villagers. If you don’t, he will kill all of them himself. What should you do?
I’m gonna look at some ethical dilemmas in my next post and try to outline what the “right” answer is to each. I’ll either do all of the ones listed here: http://listverse.com/2007/10/21/top-10-moral-dilemmas/ or a few famous ones.
Moral skepticism is the view that we cannot know moral truths, whether or not they actually exist. Commonly, it’s the view that morality is either subjectively determined (and therefore arbitrary) or simply determined by societies (a descriptive form of cultural relativism that doesn’t claim morality should be determined by societies). Common arguments for moral skepticism generally take three forms:
1. The argument from disagreement: people disagree all the time about what morality is, and that disagreement is a sign that people simply decide for themselves what they believe to be moral.
2. The demand for morality to be justly established: who has the right to determine what “morality” is? Why should anyone accept another person’s view of morality?
3. The “ought from an is” argument: even if moral rules were agreed upon, you couldn’t make the jump to “everyone ought to act this way”. (I go into a bit of detail in my last post, hopefully refuting it.)
The first argument is the most common I’ve seen. To refute it, just ask: does widespread disagreement always mean that there is no real answer? Young Earth Creationists believe the earth is roughly 6,000-7,000 years old. However, those beliefs obviously don’t nullify the scientific data about the earth’s age, and they definitely don’t make the earth’s age unknowable or non-existent. So it is possible for moral truths to be disagreed upon, yet still be knowable and exist.
To respond to the second argument, I’d establish a “goal” of morality. Imagine that morality, as a social system, is ultimately aimed toward a certain goal. This could be promoting a certain positive value, protecting certain rights, etc. If this goal for morality is established, then morality could be determined and judged by its effectiveness in working toward that goal. It wouldn’t matter whose opinion differed, or who established the best methods/rules so long as the system of morality ultimately worked.
The obvious question from here is what the goal of morality would be. Utilitarianism is the only well-known moral theory with a clear goal: aggregate happiness. I don’t entirely agree, but I feel it’s close. A key note is that utilitarians, notably John Stuart Mill, saw happiness as a deeper well-being than just physical sensations.
The main problem I have with focusing solely on happiness is that it values the rights of happier people more than those of unhappy people. My solution, and what I base a lot of my ethical views on, is to value opportunities for happiness and protection from harm. Basically, morality should be set up to promote people’s opportunity to do the things that make them happy, and to protect them from things that could harm them. The way to do this is by establishing basic human rights. Since killing is a harm to people, everyone should have a right to life. If people are universally happier when they are able to choose their own spouse, then everyone should have the right to choose their own spouse.
I’ll go into greater detail once I figure out my ethics entirely, but it’s been a bit difficult so far. I have a few kinks to go through before I feel satisfied enough to outline it as a complete ethical theory.
In my last post I talked about the difference between moral obligation and moral permissibility. Using that post as a guide, I want to focus on the cases where a person is morally obligated to perform an action.
An action is morally obligatory if every alternative to that action is morally impermissible. Understanding what people are morally obligated to do, however, is tricky. For example, few would deny that a person is morally obligated to save a drowning child when passing by that child, even if it means ruining a $50 pair of pants. However, not many are willing to claim that we are morally obligated to donate our $50 to feed a starving child rather than buy a pair of pants.
So what makes the person walking by the drowning child morally obligated to lose $50, but not the jean shopper? I have an answer to this question that deals with group obligations/control of the situation, but I don’t want to focus on this specific case and its judgment. Rather, I want to focus on the method we use to come to that judgment.
Intuition is likely the most commonly used method to come to moral judgments. However, conflicting intuitions seem to demand a better method of forming moral judgments. Or, at the very least, they demand a better method of understanding/defending our intuitive moral judgments.
My solution to this problem will admittedly rely heavily on intuition. The key point here to remember is how much morality relies on intuition, especially the intuitive idea that we should value the lives of others. (Side track: I firmly believe that reason without intuition/empathy leads to ethical egoism. Any view that relies on “you’re better off acting morally” is ultimately egoistic. I’ll talk about this more in a future post)
So here’s my solution: to understand what people are morally obligated to do, you have to intuitively judge what you would be obligated to do in that situation. It sounds simple enough: if you claim that person X is morally obligated to give up his kidney to save Person Y, then you are claiming that you would be morally obligated to give up your kidney to save Person Y.
Here’s why I’m making this point: consider what you would do to save a person you cared about. If forced into the situation, would you kill two innocent strangers to save your wife/husband (WH)? If not, would you kill your WH to save two innocent strangers? I believe most people would honestly answer yes/no, or possibly no/no. However, using a rough moral calculus, the option where two people live has a higher moral value than the option where only one person lives (resulting in your WH dying in both situations). Using a deontological theory, the moral choice in both is to refrain from any action (resulting in your WH dying in situation 1). It seems that, whichever theory you believe in, there is going to be a case where the morally better option is one where your wife/husband dies.
For the intuitive answers to be immoral (morally impermissible), you must be morally obligated to choose the option with greater moral value. This means killing your WH in situation 2 and letting them die in situation 1. I don’t believe this is plausible. Put simply, I believe most people would rather be immoral than follow the utilitarian or deontologist in the above situations. We would not condemn ourselves for choosing to save our WH over the lives of two strangers.
If our intuitions are correct, then we are not always morally obligated to choose the option with a greater objective moral value. I believe the above method, where objective moral obligations are determined by personal morality, is the best way to understand these situations.
- The point of all this: I believe all moral dilemmas can be simplified into two questions:
1. Which option has the greatest moral value?
2. Given #1, what are the people involved morally obligated to do?
I believe the answer to #1 is consequentialist, and this is where most moral debates take place. I believe intuitive judgments about what we ourselves would be obligated to do can answer #2.
One of my biggest issues with normative ethical theories (like utilitarianism and deontology) is that they don’t address the difference between what one is morally obligated to do and what is merely morally permissible. Utilitarianism is particularly guilty of this. If an action brings about greater happiness, you have to do it. If an action brings about more sadness, you can’t do it. But this isn’t intuitive at all: there have to be certain actions that are morally good but not morally required. Here are two examples:
1. You have $300. You need to pay some bills and buy food for yourself, and you also want to spend a little on seeing a movie. Paying these expenses will bring you some happiness. However, the $300 will create more happiness in others if you donate it all. So, are you morally obligated to donate your money?
2. Your child needs a life-saving surgery that costs $300, but you want to spend the money on a car stereo upgrade. Are you morally obligated to pay for your child’s surgery?
Intuitively, most of us would claim that in #1 you are morally allowed to keep the money for yourself; anyone reading this on a computer they bought has acted on that idea. We certainly praise people who donate all their money (meaning the donation has greater moral value), but we don’t obligate people to make the donation.
On the other hand, we would condemn anyone who didn’t spend the $300 on their child’s surgery. Paying for the surgery is morally obligatory, and spending the $300 on yourself is morally impermissible.
To clarify, a good way to think about it is this: an action is morally obligatory if every alternative is morally impermissible. So there are two types of moral dilemmas: ones where either action is morally permissible, and ones where one action is morally obligatory and the other is morally impermissible.
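(For the programmers reading: that definition is mechanical enough to write down. Here’s a minimal sketch in Python; the permissibility judgments themselves are assumed as hypothetical inputs, since those are the hard philosophical part.)

```python
# A minimal sketch of the definition above: an action is morally
# obligatory exactly when every alternative to it is impermissible.
# The permissibility judgments are hypothetical inputs, not derived here.
def is_obligatory(alternatives, permissible):
    """alternatives: the other available actions;
    permissible: a (hypothetical) judgment, action -> bool."""
    return all(not permissible(alt) for alt in alternatives)

# Dilemma type 1 (the donation case): every option is permissible,
# so nothing is obligatory.
print(is_obligatory(["keep the $300"], lambda a: True))    # False

# Dilemma type 2 (the surgery case): the only alternative is
# impermissible, so paying for the surgery is obligatory.
print(is_obligatory(["buy the stereo"], lambda a: False))  # True
```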
Deontology understands this difference a little better. All actions are either morally permissible or morally impermissible, depending on Kant’s categorical imperative. However, deontology does not classify positive actions as morally obligatory; rather, it focuses on actions that one is morally obligated not to do.
So the remaining question: when is an action merely morally better versus morally obligatory? I don’t have a nice straightforward answer yet, other than simple intuition. This post is more about pointing out the flaws in the popular ethical theories. Oh, and also kinda announcing that I’m working on a book about ethics that I’d publish on Amazon. Hopefully by the time I finish it I’ll be able to answer this question better.
The Prisoner’s Dilemma is a scenario in which individually rational decisions make everyone worse off. The classic example is this:
Imagine that you are one of two prisoners in police custody, being kept in separate rooms. The police officer tells you that if you confess to the crime, you will get an easier sentence; if you don’t, your sentence will be harsher. However, if neither prisoner confesses, the police only have enough evidence to put you away for a short time. So, to put numbers on this, imagine that:
If neither of you confess, you both get 3 years in jail.
If you confess but he doesn’t, you get 1 year in jail and he gets 30 years.
If he confesses but you don’t, you get 30 years in jail and he gets 1.
If both of you confess, you both get 15 years in jail.
-The first thing you should notice: regardless of whether or not the other prisoner confesses, you are better off confessing. If he doesn’t confess, you’ll get 1 year instead of 3. If he does, you’ll get 15 years instead of 30. If you have no emotional attachment to the other prisoner, it is individually rational to confess, guaranteeing yourself the lesser sentence. However, both prisoners can follow this logic. Both do what is most rational for them, so both get 15 years. If both had been able to do what is not individually rational, they would have gotten 3 years each.
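For anyone who likes to see this mechanically, here’s a minimal Python sketch of the payoff table above; it just checks that confessing dominates while making both prisoners worse off:

```python
# Payoff table from the post:
# (you_confess, he_confesses) -> (your_years, his_years). Lower is better.
payoffs = {
    (False, False): (3, 3),
    (True,  False): (1, 30),
    (False, True):  (30, 1),
    (True,  True):  (15, 15),
}

# Whatever the other prisoner does, confessing gives you fewer years,
# so confessing is the individually rational (dominant) choice.
for he_confesses in (False, True):
    assert payoffs[(True, he_confesses)][0] < payoffs[(False, he_confesses)][0]

# Yet when both follow that logic, both end up worse off than if
# neither had: (15, 15) instead of (3, 3).
print(payoffs[(True, True)], "vs", payoffs[(False, False)])
```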
It’s an interesting case, but I want to take the format and apply it to consequentialist ideas. Specifically, I want to prove that individually moral decisions can lead to an overall unjust society.
Imagine that all hospitals in the world are run by the same group. The group mandates a rule: doctors are allowed, if they choose, to kill a patient for their organs if it will save at least two other patients. From this, doctors are faced with individual decisions: should they kill a patient for their organs in order to save three other patients, all of whom have no willing donor and will die soon?
There are some who would disagree (especially if they agree with my right-to-life vs. remaining-alive distinction), but most consequentialist theories would see saving three patients as more morally valuable than keeping one alive. Therefore, killing one to save three is a morally just decision.
Now imagine that doctors throughout the world are doing this. Each time, they kill one person in order to donate their organs and save more people. Each individual decision is morally just based on consequences, because more people are saved than killed each time. However, hospitals around the world now get a reputation for patient killing, so millions of people avoid hospital trips out of fear of doctors killing them. Due to this, more people die from avoiding hospital trips than were saved by the organ donations.
Hopefully the Prisoner’s Dilemma here is clear. Using a purely life-based valuation system, each individual decision by a doctor had the better consequences. Further, no single decision can be pointed to as the “cause” of the widespread panic about hospitals. Yet these individually justified decisions result in a worse overall state.
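To make the arithmetic concrete, here’s a toy model; every number in it is made up purely for illustration, but the structure matches the scenario:

```python
# Toy model of the hospital scenario; all numbers are hypothetical.
# Each individual transplant decision kills one patient to save three,
# a net gain of two lives per decision.
decisions = 10_000                 # assumed number of such decisions worldwide
lives_saved = decisions * (3 - 1)  # 20,000 lives saved, judged decision by decision

# But the aggregate practice scares people away from hospitals.
avoiders = 5_000_000               # assumed number who now avoid hospitals
death_rate_from_avoiding = 0.01    # assumed fraction who die as a result
lives_lost = int(avoiders * death_rate_from_avoiding)  # 50,000

# Every decision looked good locally; the sum is a net loss of life.
print(lives_saved, "saved vs", lives_lost, "lost")
```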
The fault here, which I believe consequentialist theories wrongly ignore, is the system that the doctors are in. The injustice in the above scenario was the original decision by the medical group to allow doctors to kill their patients for organs. This is purposefully obvious in the above scenario, but it is less obvious in other ethical dilemmas. I’m not going to go into detail about it here, but I wanted to point out this flaw in common ethical theories (the problem is even more apparent in deontology). Basically, I believe ethical theories should focus on the system that people live in and the ethical rules people should follow. Individuals then make judgments about which option is more morally valuable based on the values of the system.
-In other news, I’m going to be researching modern consequentialist theories over the summer. I’ve had a few blog posts like this one where I put out small parts of my own ethical theory, and hopefully by the end of summer I’ll have a complete theory.
My own ethical theory has some utilitarian ideas, but I want to attack and hopefully refute the idea of solely valuing happiness. My counterexample involves a person who biologically cannot experience happiness.
Imagine a man who since birth has been unable to experience any sort of pleasure or happiness. His mood can only shift from pain to apathy, and various levels in between. He is unable to experience any form of happiness due to a genetic problem in his brain, and this problem cannot be fixed. Further, this man is on an isolated island without any other inhabitants.
While the man is clearly unhappy, he does not want to die. He reasons that even though his life is full of unhappiness, he would rather be alive than dead. Does he have the right not to be killed?
Consider the classic utilitarian viewpoint: the moral action is the one that maximizes aggregate happiness, or the one that prevents the most aggregate pain. In this scenario, the man’s life creates no happiness. He cannot experience any himself, and there are no other people on the island that could get happiness from his life. His life only has the possibility of pain. Therefore, killing him would result in less aggregate pain than not killing him. From this, utilitarianism would have to claim that killing him is the moral action.
This seems clearly counterintuitive. The obvious factor that classic utilitarianism ignores is the man’s right to life. Even if his life has no value to others, and gives himself no happiness, it is still his right to not be killed. Since classic utilitarianism would have to argue that killing the man is the moral action, classic utilitarianism is clearly false.
The voting problem is simple: why should an individual vote, knowing that their singular vote won’t affect the overall results whatsoever?
For example, let’s take a U.S. presidential election. The election will never be decided by a single vote, so an individual voter has no impact on the result. Should they then conclude that voting as a whole is not worthwhile, and never vote?
The reason this is a problem: imagine that there’s a giant vote between consequentialism and deontology. Each of the deontologists knows that the vote is valuable, because if everyone chose not to vote, the system wouldn’t work. So they all decide to vote. The consequentialists each think, “my vote isn’t going to actually affect the result, and my vote won’t affect whether or not other people vote.” So each individual consequentialist reasonably concludes that their vote has no value, and therefore decides not to vote. The end result: deontology wins, without a single vote against it. But is this how it should be?
Now imagine that there is a cause worth voting for, say a vote to replant a rainforest. If the vote succeeds, the rainforest will be replanted at almost no cost, and the benefit to the environment will be huge. Now, how do you convince each individual voter that their vote matters? “Every vote counts” is simply not true: there has never been, and never will be, a large-scale election decided by one vote (especially when you consider that vote counts have margins of error). However, there needs to be some justification to get people to vote. If either the Democratic or Republican party could convince every single one of their supporters to vote, they would never lose an election.
So the problem becomes: how do you give value to an individual vote that has no positive consequences, while still maintaining that consequences are the only source of moral value (as I believe, to an extent)?
To help illustrate my response, imagine a scenario where an innocent man is being stoned to death. Imagine an individual, Will, who is part of the crowd stoning him. The crowd, including Will, knows that the man is innocent and is stoning him for sport. Will throws one individual stone at the innocent man. The man would have died with or without Will throwing that stone. Also, in the barrage of stones, the man did not notice or feel any particular pain from the individual stone that Will threw. Will’s action, throwing the individual stone, did not affect the overall consequences of the situation in any way. However, was his action immoral? I want to say that yes, it was, and still maintain my consequentialist viewpoint.
To be clear, my form of consequentialism (value utilitarianism, for those who have read my past posts) is a form of motive consequentialism. Basically, the motive for positive consequences is what matters, not the actual consequences. For example, imagine that you see a bus heading for a small child in the street. You run to save the child and push him out of the path of the bus. Your motive is to prevent the child from being hit by the bus, which is a motive for a positive consequence. Now imagine that you push him out of the way of the bus, but the child then ends up being hit by a speeding truck, while the bus would have been able to stop in time. Was your action morally wrong, since the child would have had the better consequence if you had done nothing? I think the obvious answer is no: since you acted with the will to save the child’s life, your action was still morally right. I call this the will to consequence, or the will to promote good consequences.
So, for Will’s case, Will does not affect any consequences in the situation. Further, he can easily reason beforehand that his action will not affect the overall outcome. So initially, it appears that Will does not have any will to negative consequence. To justify my condemnation of Will’s action, I am going to separately quantify the consequences and the aggregate action.
The aggregate action of the crowd is the stoning of the innocent man. If the crowd were one entity, its motive would be to stone the innocent man. The intended consequences, the death and pain of an innocent man, are negative. So the aggregate action is immoral because it intends to produce negative consequences. Will, along with each other individual in the crowd, is then guilty of “participating” in the aggregate action. I call this a will to participate. The aggregate action is justifiably immoral because of the immoral consequences it produces. The individual throwing the stone acts immorally because they participate in an immoral aggregate action. In this way, Will is guilty of an immoral action even without intending to directly create any negative consequences, and still the only negative value in the equation is derived from the consequences.
I’m going to use the same logic to justify giving value to an individual vote. The aggregate action is the collective act of voting, and that aggregate action produces a positive consequence (say, the replanting of trees). Each individual voter has the will to participate in the aggregate action, and therefore each individual voter is performing a good action. The value is still derived from the consequences, even though the individual voter does not directly affect them.
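Here’s a minimal sketch of that structure, with made-up vote totals; the point is that one vote is causally idle while the aggregate action carries all the value:

```python
# Made-up totals for the rainforest vote; the point is structural,
# not the specific numbers.
votes_for, votes_against = 1_000_000, 900_000

passes = votes_for > votes_against                         # True: measure passes
passes_without_my_vote = (votes_for - 1) > votes_against   # still True
assert passes == passes_without_my_vote

# The marginal consequence of one vote is zero, yet the outcome
# exists only as the sum of individual votes. On the "will to
# participate" view, each voter shares in the value of the
# aggregate action rather than in their own marginal effect.
```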
I feel this philosophy is important because of the contrast between individual actions and global matters. Most individuals do not affect matters on a global scale, and I see global ethics in a similar way: a “good” global society would require each individual to contribute to the overall good, even though each individual’s effect on its own amounts to nothing. I’ll go into this more deeply in the future.
In my post on cultural relativism I mentioned that there are certain “rules” we can use to judge all moral actions, regardless of the society. A blanket statement like “killing is wrong” is false, as evidenced by cases like self-defense. However, a statement like “killing is wrong when it isn’t done for a greater purpose” might be possible.
To start, I want to state that there absolutely has to be a system, or “formula”, for determining which actions are moral. For moral truths, there are two options: either we can know moral truths, or we can’t. This is the divide between moral skeptics and proponents of moral knowledge. My argument is this: if we can have moral knowledge, there has to be a way to determine that knowledge.
Consider two hypothetical moral dilemmas, A and B. Suppose that the current “moral formula” says that Option 1 is correct for dilemma A, and Option 1 is correct for dilemma B. Now suppose that, upon further reflection, Option 1 really is correct for A, but Option 2 is in fact the correct option for B. Cases like this are typically brought up to counter a well-known moral formula (such as the utility monster against utilitarianism). However, what basis is there to say that Option 2 is correct for B? In order to argue for Option 2, you would have to use some sort of formula to determine that it is better than Option 1. You can’t just say “2 is better than 1”; you have to say “2 is better than 1 because of _____”. But that only shows that the original formula is incorrect, and that the logic behind the counterexample is the better formula. This process has to end somewhere, which in my opinion would be the universal formula for morality. If such a formula is impossible to find, then I would argue that moral skepticism is correct.
My personal theory is value utilitarianism (which I’ve written about in my previous posts). Basically, it’s a consequentialist theory that defines the value of consequences in terms of human rights, group bonds, and happiness. To prove my point about the need for a universal formula: if anyone can think of a counterexample to value utilitarianism, please post it in the comments.
Utilitarianism is a moral theory that revolves around the maximization of happiness. All moral decisions are made by the use of the “principle of utility”, which claims that the right course of action is the one that maximizes aggregate happiness. So whenever there’s a moral dilemma, you have to figure out which option would either cause the most happiness or cause the least amount of pain.
First, there are a lot of great ideas in utilitarianism. It is a consequentialist theory, meaning the morality of each action is determined by its consequences (as opposed to deontological theories, where the morality of an action is judged on the value of the action alone). I think there is a simple way to prove consequentialism: consider a scenario where you know you have to kill an innocent person in order to save the lives of a billion people (similar to the drifter scenario). Any theory that ignores the consequences, in this case the loss of countless lives, cannot be seriously considered a good moral theory.
Another good point of utilitarianism is the promotion of happiness. I’m not a hedonist, but I do think the value of happiness is underrated. Utilitarianism emphasizes that you should only value things that are intrinsically good. The conclusion is that the only thing that is intrinsically valuable, and thus should be valued, is happiness.
There are two main problems with utilitarianism: its demand for impartiality and its purely hedonistic theory of value.
Utilitarianism states that you have to weigh everyone’s happiness equally. This concept is good in that you can’t do something good for yourself when it would cause another person pain. However, it demands too much, since everyone has personal obligations to family, community, and other groups. Consider a man whose income is the sole support for his family of four. He thinks to himself: I can either take my family out to dinner, or donate the $100 or so to a charity that will save lives in Africa. The lives saved in Africa will create more happiness than the dinner, so according to utilitarianism the man has to donate the money. You could take this scenario as far as necessary to make the point: would there ever be an amount of money the man could spend on his family that would create more happiness than using it to save lives?
Even though it is noble to donate to charity, it should not be morally required to donate everything you earn to charity. However you calculate it, it has to be morally permissible, if not morally required, to put your family first.
Lastly, there is a clear objection to the principle of utility. Consider a proposition to enslave 1% of the population. If done, this would ease enough burden for the remaining 99% of the population that overall happiness would increase (meaning that the unhappiness of the 1% would be outweighed by the happiness of the 99%). Would this proposition be morally good? The utilitarian view would have to be yes. Aggregate happiness is increased, so the proposition is morally good. However, this is obviously incorrect. Most people correctly believe that slavery is wrong even if the benefits are there. This is because we value individual rights, which utilitarianism does not recognize.
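The aggregate calculation behind this objection is easy to make explicit. Here’s a toy version with entirely made-up utility numbers:

```python
# Toy version of the slavery objection; all utility numbers are made up.
population = 1000
enslaved = population // 100          # the 1%
free = population - enslaved

happiness_before = population * 50                # everyone at 50: total 50,000
happiness_after = enslaved * -100 + free * 55     # enslaved at -100, free at 55

# The principle of utility compares only the aggregates, so it
# endorses the proposition despite what happens to the enslaved 1%.
print(happiness_after, ">", happiness_before, "->", happiness_after > happiness_before)
```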
I believe these problems can be fixed, and will post my own theory once I’ve figured out the kinks.