The Bounds of Reason: Game Theory and the Unification of the Behavioral Sciences – Revised Edition
Author: Herbert Gintis (Author)
ASIN: 0691160848
Publisher: Princeton University Press
Publication date: 2014-04-20
Edition: Revised
Language: English
Print length: 284 pages
ISBN-10: 0691160848
ISBN-13: 978-0691160849
Book Description
Game theory is central to understanding human behavior and relevant to all of the behavioral sciences―from biology and economics to anthropology and political science. However, as The Bounds of Reason demonstrates, game theory alone cannot fully explain human behavior and should instead complement other key concepts championed by the behavioral disciplines. Herbert Gintis shows that just as game theory without broader social theory is merely technical bravado, so social theory without game theory is a handicapped enterprise. This edition has been thoroughly revised and updated.
Reinvigorating game theory, The Bounds of Reason offers innovative thinking for the behavioral sciences.
Review
“Gintis’ work reflects an amazing breadth of knowledge of the behavioural sciences. He is ever ready to pose unusual questions and to defend unorthodox proposals. The Bounds of Reason is Gintis’ most ambitious project to date, one that draws upon all of his extraordinary originality and learning.”―Peter Vanderschraaf, Journal of Economics and Philosophy
“The book is a combination of an excellent textbook on game theory and an innovative treatise advocating the unification of the behavioural sciences and a refounding of game theory on different epistemic foundations. . . . It is clearly an important contribution to the current debate over the rational actor model that the rise of behavioural economics has provoked.”―Oxonomics
From the Back Cover
“Gintis contributes importantly to a new insight gaining ascendancy: economy is about the unintended consequences of human sociality. This book is firmly in the revolutionary tradition of David Hume (Convention) and Adam Smith (Sympathy).”–Vernon L. Smith, Nobel Prize-winning economist
“Herbert Gintis makes a strong case that game theory–by predicting social norms–provides an essential tool for understanding human social behavior. More provocatively, Gintis suggests that humans have a genetic tendency to follow social norms even when it is to their disadvantage. These claims will be controversial–but they make for fascinating reading.”–Eric S. Maskin, Nobel Laureate in Economics
“Recent findings in experimental economics have highlighted the need for a rigorous analytical theory of choice and strategic interaction for the social sciences that captures the unexpectedly wide variety of observed behaviors. In this exciting book, Gintis convincingly argues that an empirically informed game-theoretic approach goes a long way toward achieving this attractive goal.”–Ernst Fehr, University of Zurich
“This brave and sweeping book deserves to be widely and carefully read.”–Adam Brandenburger, New York University
“The Bounds of Reason makes a compelling case for game theory but at the same time warns readers that there is life beyond game theory and that all social science cannot be understood by this method alone. This splendid book makes skillful use of figures and algebra, and reads like a charm.”–Kaushik Basu, Cornell University
“Excellent and stimulating, The Bounds of Reason is broad enough to encompass the central concepts and results in game theory, but discerning enough to omit peripheral developments. The book illustrates deep theoretical results using simple and entertaining examples, makes extensive use of agent-based models and simulation methods, and discusses thorny methodological issues with unusual clarity.”–Rajiv Sethi, Barnard College, Columbia University
Excerpt. © Reprinted by permission. All rights reserved.
The Bounds of Reason
Game Theory and the Unification of the Behavioral Sciences
By Herbert Gintis
PRINCETON UNIVERSITY PRESS
Copyright © 2009 Princeton University Press
All rights reserved.
ISBN: 978-0-691-16084-9
Contents
Preface, xi
1 Decision Theory and Human Behavior, 1
2 Game Theory: Basic Concepts, 33
3 Game Theory and Human Behavior, 48
4 Rationalizability and Common Knowledge of Rationality, 86
5 Extensive Form Rationalizability, 106
6 The Logical Antinomies of Knowledge, 123
7 The Mixing Problem: Purification and Conjectures, 131
8 Bayesian Rationality and Social Epistemology, 142
9 Common Knowledge and Nash Equilibrium, 156
10 The Analytics of Human Sociality, 174
11 The Unification of the Behavioral Sciences, 194
12 Summary, 221
13 Table of Symbols, 224
References, 226
Subject Index, 254
Author Index, 258
CHAPTER 1
Decision Theory and Human Behavior
People are not logical. They are psychological.
Anonymous
People often make mistakes in their maths. This does not mean that we should abandon arithmetic.
Jack Hirshleifer
Decision theory is the analysis of the behavior of an individual facing nonstrategic uncertainty—that is, uncertainty due to what we term “Nature” (a stochastic natural event such as a coin flip, seasonal crop loss, personal illness, and the like) or, if other individuals are involved, uncertainty arising from behavior that is treated as a statistical distribution known to the decision maker. Decision theory depends on probability theory, which was developed in the seventeenth and eighteenth centuries by such notables as Blaise Pascal, Daniel Bernoulli, and Thomas Bayes.
A rational actor is an individual with consistent preferences (§1.1). A rational actor need not be selfish. Indeed, if rationality implied selfishness, the only rational individuals would be sociopaths. Beliefs, called subjective priors in decision theory, logically stand between choices and payoffs. Beliefs are primitive data for the rational actor model. In fact, beliefs are the product of social processes and are shared among individuals. To stress the importance of beliefs in modeling choice, I often describe the rational actor model as the beliefs, preferences, and constraints model, or the BPC model. The BPC terminology has the added attraction of avoiding the confusing and value-laden term “rational.”
The BPC model requires only preference consistency, which can be defended on basic evolutionary grounds. While there are eminent critics of preference consistency, their claims are valid in only a few narrow areas. Because preference consistency does not presuppose unlimited information-processing capacities and perfect knowledge, even bounded rationality (Simon 1982) is consistent with the BPC model. Because one cannot do behavioral game theory, by which I mean the application of game theory to the experimental study of human behavior, without assuming preference consistency, we must accept this axiom to avoid the analytical weaknesses of the behavioral disciplines that reject the BPC model, including psychology, anthropology, and sociology (see chapter 11).
Behavioral decision theorists have argued that there are important areas in which individuals appear to have inconsistent preferences. Except when individuals do not know their own preferences, this is a conceptual error based on a misspecification of the decision maker’s preference function. We show in this chapter that, assuming individuals know their preferences, adding information concerning the current state of the individual to the choice space eliminates preference inconsistency. Moreover, this addition is completely reasonable because preference functions do not make any sense unless we include information about the decision maker’s current state. When we are hungry, scared, sleepy, or sexually deprived, our preference ordering adjusts accordingly. The idea that we should have a utility function that does not depend on our current wealth, the current time, or our current strategic circumstances is also not plausible. Traditional decision theory ignores the individual’s current state, but this is just an oversight that behavioral decision theory has brought to our attention.
Compelling experiments in behavioral decision theory show that humans violate the principle of expected utility in systematic ways (§1.5.1). Again, it must be stressed that this does not imply that humans violate preference consistency over the appropriate choice space but rather that they have incorrect beliefs deriving from what might be termed “folk probability theory” and make systematic performance errors in important cases (Levy 2008).
To understand why this is so, we begin by noting that, with the exception of hyperbolic discounting when time is involved (§1.2), there are no reported failures of the expected utility theorem in nonhumans, and there are some extremely beautiful examples of its satisfaction (Real 1991). Moreover, territoriality in many species is an indication of loss aversion (Gintis 2007b). The difference between humans and other animals is that the latter are tested in real life, or in elaborate simulations of real life, as in Leslie Real’s work with bumblebees (Real 1991), where subject bumblebees are released into elaborate spatial models of flowerbeds. Humans, by contrast, are tested using imperfect analytical models of real-life lotteries. While it is important to know how humans choose in such situations, there is certainly no guarantee they will make the same choices in the real-life situation and in the situation analytically generated to represent it. Evolutionary game theory is based on the observation that individuals are more likely to adopt behaviors that appear to be successful for others. A heuristic that says “adopt risk profiles that appear to have been successful for others” may lead to preference consistency even when individuals are incapable of evaluating analytically presented lotteries in the laboratory. Indeed, a plausible research project in extending the rational actor model would be to replace the assumption of purely subjective priors (Savage 1954) with the assumption that individuals are embedded in a network of minds across which cognition is more or less widely distributed (Gilboa and Schmeidler 2001; Dunbar et al. 2010; Gintis 2010).
In addition to the explanatory success of theories based on the BPC model, supporting evidence from contemporary neuroscience suggests that expected utility maximization is not simply an “as if” story. In fact, the brain’s neural circuitry actually makes choices by internally representing the payoffs of various alternatives as neural firing rates and choosing a maximal such rate (Shizgal 1999; Glimcher 2003; Glimcher and Rustichini 2004; Glimcher et al. 2005). Neuroscientists increasingly find that an aggregate decision making process in the brain synthesizes all available information into a single unitary value (Parker and Newsome 1998; Schall and Thompson 1999). Indeed, when animals are tested in a repeated trial setting with variable rewards, dopamine neurons appear to encode the difference between the reward that the animal expected to receive and the reward that the animal actually received on a particular trial (Schultz et al. 1997; Sutton and Barto 2000), an evaluation mechanism that enhances the environmental sensitivity of the animal’s decision making system. This error prediction mechanism has the drawback of seeking only local optima (Sugrue et al. 2005). Montague and Berns (2002) address this problem, showing that the orbitofrontal cortex and striatum contain a mechanism for more global predictions that include risk assessment and discounting of future rewards. Their data suggest a decision-making model that is analogous to the famous Black-Scholes options-pricing equation (Black and Scholes 1973).
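The difference-based updating described above can be sketched in a few lines. The following is a minimal Python illustration of a Rescorla-Wagner/TD-style prediction-error update, not a model of the cited neuroscience; the learning rate and reward stream are invented for the example.

```python
# Minimal sketch of a reward-prediction-error update (Rescorla-Wagner / TD-style).
# All numbers here (learning rate, reward stream) are illustrative assumptions.

def update_value(expected: float, received: float, learning_rate: float = 0.1) -> float:
    """Move the expected reward toward the received reward by a fraction
    of the prediction error, delta = received - expected."""
    delta = received - expected  # the difference dopamine neurons appear to encode
    return expected + learning_rate * delta

expected = 0.0
for received in [1.0, 1.0, 0.0, 1.0, 1.0]:  # variable rewards across repeated trials
    expected = update_value(expected, received)
    print(f"received={received:.1f}  new expectation={expected:.3f}")
```

Because the update is driven only by the local error signal, it converges to the running average of recent rewards, which is why a mechanism of this kind finds only local optima.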
The existence of an integrated decision-making apparatus in the human brain itself is predicted by evolutionary theory. The fitness of an organism depends on how effectively it makes choices in an uncertain and varying environment. Effective choice must be a function of the organism’s state of knowledge, which consists of the information supplied by the sensory inputs that monitor the organism’s internal states and its external environment. In relatively simple organisms, the choice environment is primitive and is distributed in a decentralized manner over sensory inputs. But in three separate groups of animals, craniates (vertebrates and related creatures), arthropods (including insects, spiders, and crustaceans), and cephalopods (squid, octopuses, and other mollusks), a central nervous system with a brain (a centrally located decision-making and control apparatus) evolved. The phylogenetic tree of vertebrates exhibits increasing complexity through time and increasing metabolic and morphological costs of maintaining brain activity. Thus, the brain evolved because larger and more complex brains, despite their costs, enhanced the fitness of their carriers. Brains therefore are ineluctably structured to make consistent choices in the face of the various constellations of sensory inputs their bearers commonly experience.
Before the contributions of Bernoulli, Savage, von Neumann, and other experts, no creature on Earth knew how to value a lottery. The fact that people do not know how to evaluate abstract lotteries does not mean that they lack consistent preferences over the lotteries that they face in their daily lives.
Despite these provisos, experimental evidence on choice under uncertainty is still of great importance because in the modern world we are increasingly called upon to make such “unnatural” choices based on scientific evidence concerning payoffs and their probabilities.
1.1 Beliefs, Preferences, and Constraints
In this section we develop a set of behavioral properties, among which consistency is the most prominent, that together ensure that we can model agents as maximizers of preferences.
A binary relation R_A on a set A is a subset of A × A. We usually write the proposition (x, y) ∈ R_A as x R_A y. For instance, the arithmetical operator “less than” (<) is a binary relation, where we write (x, y) ∈ < as x < y. A preference ordering ≥_A on A is a binary relation with the following three properties, which must hold for all x, y, z ∈ A and any set B ⊂ A:
1. Complete: x ≥_A y or y ≥_A x;
2. Transitive: x ≥_A y and y ≥_A z imply x ≥_A z;
3. Independent of Irrelevant Alternatives: For x, y ∈ B, x ≥_B y if and only if x ≥_A y.
Because of the third property, we need not specify the choice set and can simply write x ≥ y. We also make the behavioral assumption that given any choice set A, the individual chooses an element x ∈ A such that for all y ∈ A, x ≥ y. When x ≥ y, we say “x is weakly preferred to y.”
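To make these definitions concrete, here is a minimal Python sketch with an invented three-element choice set: the relation is a set of ordered pairs, and the behavioral assumption amounts to selecting an element weakly preferred to everything in the choice set. (Independence of Irrelevant Alternatives is a property across choice sets, so the sketch checks only completeness and transitivity.)

```python
# A binary relation on a finite set A, represented as a set of ordered pairs.
# Weak preference x >= y is encoded as the pair (x, y) belonging to the relation.

A = {"apple", "banana", "cherry"}  # illustrative choice set
geq = {("apple", "apple"), ("banana", "banana"), ("cherry", "cherry"),
       ("apple", "banana"), ("banana", "cherry"), ("apple", "cherry")}

def is_complete(rel, A):
    """Completeness: for all x, y, either x >= y or y >= x."""
    return all((x, y) in rel or (y, x) in rel for x in A for y in A)

def is_transitive(rel, A):
    """Transitivity: x >= y and y >= z imply x >= z."""
    return all((x, z) in rel
               for x in A for y in A for z in A
               if (x, y) in rel and (y, z) in rel)

def choose(rel, B):
    """The behavioral assumption: pick an x in B with x >= y for all y in B."""
    return next(x for x in B if all((x, y) in rel for y in B))

assert is_complete(geq, A) and is_transitive(geq, A)
print(choose(geq, A))  # -> 'apple'
```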
The first condition is Completeness, which implies that any member of A is weakly preferred to itself (for any x in A, x ≥ x). In general, we say a binary relation R is reflexive if, for all x, x R x. Thus, completeness implies reflexivity. We refer to ≥ as “weak preference” in contrast with “strong preference” >. We define x > y to mean “it is false that y ≥ x.” We say x and y are equivalent if x ≥ y and y ≥ x, and we write x ∼ y. As an exercise, you may use elementary logic to prove that if ≥ satisfies the completeness condition, then > satisfies the following exclusion condition: if x > y, then it is false that y > x.
The second condition is Transitivity, which says that x ≥ y and y ≥ z imply x ≥ z. It is hard to see how this condition could fail for anything we might like to call a preference ordering. As an exercise, you may show that x > y and y ≥ z imply x > z, and that x ≥ y and y > z imply x > z. Similarly, you may use elementary logic to prove that if ≥ satisfies the transitivity condition, then ∼ is transitive as well (i.e., satisfies the transitivity condition).
The third condition, Independence of Irrelevant Alternatives, means that the relative attractiveness of two choices does not depend upon the other choices available to the individual. For instance, suppose an individual generally prefers meat to fish when eating out, but if the restaurant serves lobster, the individual believes the restaurant serves superior fish, and hence prefers fish to meat, even though he never chooses lobster; thus, Independence of Irrelevant Alternatives fails. In such cases, the condition can be restored by suitably refining the choice set. For instance, in the preceding example we can specify two qualities of fish instead of one. More generally, if the desirability of an outcome x depends on the set A from which it is chosen, we can form a new choice space Ω*, elements of which are ordered pairs (A, x), where x ∈ A ⊆ Ω, and restrict choice sets in Ω* to be subsets of Ω* all of whose first elements are equal. In this new choice space, Independence of Irrelevant Alternatives is satisfied.
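The refinement just described is easy to make concrete. In the Python sketch below (menu contents and numerical rankings are invented for the example), the diner’s ranking of fish versus meat flips with the menu; re-encoding each option as a (menu, item) pair yields a single consistent ranking on the new choice space.

```python
# Menu-dependent preferences: fish outranks meat only on the menu that lists lobster.
# Re-encoding options as (menu, item) pairs restores a single, consistent ranking.

menu_plain = ("meat", "fish")
menu_lobster = ("meat", "fish", "lobster")

# Rankings within each menu (higher score = more preferred); values are illustrative.
rank = {
    (menu_plain, "meat"): 2, (menu_plain, "fish"): 1,
    (menu_lobster, "meat"): 2, (menu_lobster, "fish"): 3, (menu_lobster, "lobster"): 1,
}

def choose(menu):
    """Choose the best (menu, item) pair among pairs sharing the same first element."""
    return max(menu, key=lambda item: rank[(menu, item)])

print(choose(menu_plain))    # -> 'meat'
print(choose(menu_lobster))  # -> 'fish' (lobster is never chosen, yet changes the choice)
```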
The most general situation in which the Independence of Irrelevant Alternatives fails is when the choice set supplies independent information concerning the social frame in which the decision-maker is embedded. This aspect of choice is analyzed in §1.4, where we deal with the fact that preferences are generally state-dependent; when the individual’s social or personal situation changes, his preferences will change as well. Unless this factor is taken into account, rational choices may superficially appear inconsistent.
When the preference relation ≥ is complete, transitive, and independent of irrelevant alternatives, we term it consistent. If ≥ is a consistent preference relation, then there will always exist a preference function such that the individual behaves as if maximizing this preference function over the set A from which he or she is constrained to choose. Formally, we say that a preference function u : A → ℝ represents a binary relation ≥ if, for all x, y ∈ A, u(x) ≥ u(y) if and only if x ≥ y. We have the following theorem.
Theorem 1.1 A binary relation ≥ on the finite set A of payoffs can be represented by a preference function u : A → ℝ if and only if ≥ is consistent.
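For a finite A, one standard construction behind Theorem 1.1 lets u(x) count the alternatives that x is weakly preferred to; consistency guarantees that this u represents ≥. Here is a Python sketch under that assumption, with an invented three-element payoff set.

```python
# One standard construction of a representing utility on a finite set:
# u(x) = number of alternatives that x is weakly preferred to.
# Consistency (completeness + transitivity) guarantees u(x) >= u(y) iff x >= y.

A = ["low", "medium", "high"]               # illustrative payoffs
geq = {(x, y) for x in A for y in A
       if A.index(x) >= A.index(y)}         # a consistent relation on A

def utility(rel, A):
    return {x: sum(1 for y in A if (x, y) in rel) for x in A}

u = utility(geq, A)
print(u)  # {'low': 1, 'medium': 2, 'high': 3}
assert all((u[x] >= u[y]) == ((x, y) in geq) for x in A for y in A)
```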
It is clear that u(·) is not unique, and indeed, we have the following theorem.
Theorem 1.2 If u(·) represents the preference relation ≥ and f(·) is a strictly increasing function, then v(·) = f(u(·)) also represents ≥. Conversely, if both u(·) and v(·) represent ≥, then there is an increasing function f(·) such that v(·) = f(u(·)).
The first half of the theorem is true because if f is strictly increasing, then u(x) > u(y) implies v(x) = f(u(x)) > f(u(y)) = v(y), and conversely. For the second half, suppose u(·) and v(·) both represent ≥, and for any y ∈ ℝ such that u(x) = y for some x ∈ A, let f(y) = v(u⁻¹(y)), which is possible because u is an increasing function. Then f(·) is increasing (because it is the composition of two increasing functions) and f(u(x)) = v(u⁻¹(u(x))) = v(x), which proves the theorem.
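A quick numerical check of Theorem 1.2, reusing the utilities from the previous sketch: composing u with a strictly increasing f leaves every pairwise comparison unchanged.

```python
import math

# Theorem 1.2, numerically: a strictly increasing transform of u represents
# the same preferences, because it preserves every pairwise comparison.
u = {"low": 1, "medium": 2, "high": 3}        # utilities from the previous sketch
v = {x: math.exp(ux) for x, ux in u.items()}  # f(t) = e^t is strictly increasing

assert all((u[x] >= u[y]) == (v[x] >= v[y]) for x in u for y in u)
print(v)
```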
1.1.1 The Meaning of Rational Action
The origins of the BPC model lie in the eighteenth-century research of Jeremy Bentham and Cesare Beccaria. In his Foundations of Economic Analysis (1947), economist Paul Samuelson removed the hedonistic assumptions of utility maximization by arguing, as we have in the previous section, that utility maximization presupposes nothing more than transitivity and some harmless technical conditions akin to those specified above.
Rational does not imply self-interested. There is nothing irrational about caring for others, believing in fairness, or sacrificing for a social ideal. Nor do such preferences contradict decision theory. For instance, suppose a man with $100 is considering how much to consume himself and how much to give to charity. Suppose he faces a tax or subsidy such that for each $1 he contributes to charity, he is obliged to pay p dollars. Thus, p > 1 represents a tax, while 0 < p < 1 represents a subsidy. We can then treat p as the price of a unit contribution to charity and model the individual as maximizing his utility for personal consumption x and contributions to charity y, say u(x, y), subject to the budget constraint x + py = 100. Clearly, it is perfectly rational for him to choose y > 0. Indeed, Andreoni and Miller (2002) have shown that in making choices of this type, consumers behave in the same way as they do when choosing among personal consumption goods; i.e., they satisfy the generalized axiom of revealed preference.
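To compute an optimum in the charity example, one must commit to a specific utility function; the Cobb-Douglas form in the Python sketch below is an invented illustration, not part of the argument. With u(x, y) = a ln x + (1 − a) ln y and budget x + py = 100, the first-order condition gives y* = 100(1 − a)/p, so a positive gift is optimal for any a < 1.

```python
import math

# The charity example with an assumed Cobb-Douglas utility (an illustration only):
# maximize u(x, y) = a*ln(x) + (1 - a)*ln(y) subject to x + p*y = 100.
# Substituting x = 100 - p*y and setting du/dy = 0 gives y* = 100*(1 - a)/p.

def optimal_gift(a: float, p: float, budget: float = 100.0) -> float:
    """Closed-form utility-maximizing contribution y* under the assumed utility."""
    return budget * (1 - a) / p

def utility(y: float, a: float, p: float, budget: float = 100.0) -> float:
    x = budget - p * y  # personal consumption left after giving
    return a * math.log(x) + (1 - a) * math.log(y)

a, p = 0.7, 1.25                 # illustrative taste and tax parameters
y_star = optimal_gift(a, p)      # -> 24.0: giving is rational here
assert all(utility(y_star, a, p) >= utility(y, a, p)
           for y in [1, 10, 24, 40, 60])  # spot-check against other feasible gifts
print(f"optimal contribution y* = {y_star}")
```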
Decision theory does not presuppose that the choices people make are welfare-improving. In fact, people are often slaves to such passions as smoking cigarettes, eating junk food, and engaging in unsafe sex. These behaviors in no way violate preference consistency.
If humans fail to behave as prescribed by decision theory, we need not conclude that they are irrational. In fact, they may simply be ignorant or misinformed. However, if human subjects consistently make intransitive choices over lotteries (e.g., §1.5.1), then either they do not satisfy the axioms of expected utility theory or they do not know how to evaluate lotteries. The latter is often called performance error. Performance error can be reduced or eliminated by formal instruction, so that the experts that society relies upon to make efficient decisions may behave quite rationally even in cases where the average individual violates preference consistency.
(Continues…) Excerpted from The Bounds of Reason by Herbert Gintis. Copyright © 2009 Princeton University Press. Excerpted by permission of PRINCETON UNIVERSITY PRESS.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.