
Sunday, June 24, 2012

Proposed Failures of Utility Theory

Many claims have been made over the years that utility theory and expected utility theory do not accurately formalize human decision making, largely on the grounds that utility theory assumes rationality and that humans are not always rational.

To begin examining these claims, we must first establish a working definition of rationality. According to Wikipedia, a rational decision is not just one that is reasoned, but one that is optimal for solving a problem or achieving a goal. Whether a decision is rational therefore depends on the nature of the situation: a rational decision in one case may be irrational in another. The definition of a rational decision also changes not only with the problem being solved, but with the individual. Since a rational decision is one that maximizes utility, and since everyone's utility function is assumed to be largely unique, a decision that is rational for one person in a given situation may be irrational for another person in that same situation.

But something interesting arises from this conception. If the rationality of a decision depends on the individual's utility function, and everyone's utility function is unique, does this not imply that we, as outside observers, can never pass judgment on the rationality or irrationality of another individual's decisions?

By this very definition of rationality, I contend that it is impossible for any failure of utility theory ever to occur. A failure of utility theory would imply that an individual was unable to maximize his expected utility through reasoned decision making; but, by definition, the only person who can really know whether he has maximized his utility is the individual himself, since his utility function is specific to him.

But then, how do we explain individuals regretting their decisions, or being otherwise unhappy with the choices they have made? The answer has to do with foresight and incomplete information. I contend that each individual makes the best decisions for himself 100% of the time, given the data and the forward-looking ability available to him at that moment. Maximizing a shorter-term utility function can lead to very different actions than maximizing a longer-term one, and likewise for varying sets of information.

A commonly cited scenario involves expected value experiments in which, for example, a person can either take $50 outright or play a 50-50 gamble in which he might win, say, $120 or walk away with nothing. The usual finding is that most people take the $50 rather than play the game. This is worth a thought, since the expected value of the wager is $60, making it superior to the sure $50 in purely monetary terms. Interpreting the result this way, however, places a very narrow conception on the idea of utility. Utility is not based solely on the amount of money received. It also reflects the stress associated with possibly losing a wager, the lingering doubt that the wager is even fair, and so on. These factors carry different weights for different individuals, so all the experiment ultimately shows is that more people have utility functions that place a high weight on the dis-utility of the stress and uncertainty inherent in a wager.
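To make the distinction between expected value and expected utility concrete, here is a minimal sketch in Python. The square-root utility function is purely an illustrative assumption (any concave function modeling risk aversion would do), as are the dollar amounts; this is not a model of any real experiment.

    import math

    def utility(dollars):
        # Illustrative assumption: concave (square-root) utility,
        # i.e., diminishing returns to each additional dollar.
        return math.sqrt(dollars)

    # Option A: take $50 with certainty.
    certain_utility = utility(50)

    # Option B: a 50-50 gamble between $120 and nothing.
    gamble_expected_value = 0.5 * 120 + 0.5 * 0              # $60 in expected dollars
    gamble_expected_utility = 0.5 * utility(120) + 0.5 * utility(0)

    print(f"Expected value of gamble:    ${gamble_expected_value:.2f}")
    print(f"Utility of certain $50:      {certain_utility:.2f}")
    print(f"Expected utility of gamble:  {gamble_expected_utility:.2f}")

Under this assumed utility function, the gamble's expected value ($60) exceeds the sure $50, yet the sure $50 yields higher expected utility (about 7.07 versus about 5.48), so preferring the certain money is exactly what utility maximization predicts for a sufficiently risk-averse individual.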

Another fault people often find with utility theory is that it cannot account for human unselfishness. These arguments are equally unfounded, and mainly for the same reason: I believe this is not so much a failure of the utility model as a failure to extend the concept of utility far enough into the future.

For instance, the example of parents caring for their children is often cited as an infallible demonstration of pure good will. Yet it can be argued that parents care for their children because the children may provide certain benefits in the future, such as money if they are successful, or simply care when the parents are older. Another way to think about it is that not taking care of one's children carries serious negative consequences, consequences that far outweigh the hardships of raising them. In this sense, I believe every altruistic action humans are capable of can be argued to be, ultimately, for utility maximization purposes; it is only a matter of how far into the future we are willing to look. Another often cited example is donating to charity. Is this not completely altruistic? Not quite. On closer consideration, there is something in the individual's utility function that causes him to want to donate: he may derive some form of happiness from giving to others, or fulfill a moral obligation that would otherwise nag at him incessantly.

Thus, this individual does not donate to charity for the sake of doing something selfless, but because it makes him feel better, gives him peace of mind, and so on. Consequently, I believe any action can be boiled down to these "selfish" incentives, and I do not think there is anything morally wrong with that. We are simply wired to make decisions based on introspection and self-interest; it is what has worked best for our species for many generations, and it is the method of survival that evolution has favored so far.
