2001
Winter, E. (2001).
Scapegoats and Optimal Allocation of Responsibility.
Discussion Papers. Retrieved from
/files/dp266.pdf Publisher's Version
Abstract: We consider a model of hierarchical organizations in which agents have the option of reducing the probability of failure by investing towards their decisions. A mechanism specifies a distribution of sanctions in case of failure across the levels of the hierarchy. It is said to be investment-inducing if it induces all agents to invest in equilibrium. It is said to be optimal if it does so at minimal total punishment. We characterize optimal investment-inducing mechanisms in several versions of our benchmark model. In particular we refer to the problem of allocating individuals with diverse qualifications to different levels of the hierarchy as well as allocating tasks of different importance across different hierarchy levels. We also address the issue of incentive-optimal hierarchy architectures.
Bar-Hillel, M., & Attali, Y. (2001).
Seek Whence: Answer Sequences and Their Consequences in Key-Balanced Multiple-Choice Tests.
Discussion Papers. The American Statistician 56 (2002), 299-303. Retrieved from
/files/dp252.pdf Publisher's Version
Abstract: The professional producers of such wide-spread high-stakes tests as the SAT have a policy of balancing, rather than randomizing, the answer keys of their tests. Randomization yields answer keys that are, on average, balanced, whereas a policy of deliberate balancing assures this desirable feature not just on average, but in every test. This policy is a well-kept trade secret, and apparently has been successfully kept as such, since there is no evidence of any awareness on the part of test takers and the coaches that serve them that this is an exploitable feature of answer keys. However, balancing leaves an identifiable signature on answer keys, thus not only jeopardizing the secret, but also creating the opportunity for its exploitation. The present paper presents the evidence for key balancing, the traces this practice leaves in answer keys, and the ways in which testwise test takers can exploit them. We estimate that such test takers can add between 10 and 16 points to their final SAT score, on average, depending on their knowledge level. The secret now being out of the closet, the time has come for test makers to do the right thing, namely to randomize, not balance, their answer keys.
Following the link to the published version of dp252, an earlier, but fuller, version is included.
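The exploitable signature described above can be illustrated with a toy simulation (this is not the authors' SAT calculation; the test size, number of options, and knowledge level below are made-up parameters): a test taker who knows the key is exactly balanced guesses, on each unknown question, the option least used among his confidently known answers, and does better than uniform random guessing.

```python
import random

def simulate(n_questions=100, n_options=5, n_known=60, trials=2000, seed=0):
    """Average guessing accuracy on unknown questions when exploiting
    the fact that the answer key is exactly balanced."""
    rng = random.Random(seed)
    per_option = n_questions // n_options
    total = 0.0
    for _ in range(trials):
        # Build an exactly balanced, shuffled answer key.
        key = [o for o in range(n_options) for _ in range(per_option)]
        rng.shuffle(key)
        known = set(rng.sample(range(n_questions), n_known))
        # Tally how often each option appears among the known answers.
        used = [0] * n_options
        for q in known:
            used[key[q]] += 1
        # Guess the option with the most occurrences still unaccounted
        # for in the balanced key.
        correct = 0
        for q in range(n_questions):
            if q in known:
                continue
            guess = max(range(n_options), key=lambda o: per_option - used[o])
            if key[q] == guess:
                correct += 1
        total += correct / (n_questions - n_known)
    return total / trials

acc = simulate()
print(f"exploiting balance: {acc:.3f} vs. uniform guessing: {1/5:.3f}")
```

Under these hypothetical parameters the exploiting guesser is measurably more accurate than the 1-in-5 baseline, which is the mechanism behind the score gains the abstract estimates.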
Dubey, P., & Geanakoplos, J. (2001).
Signalling and Default: Rothschild-Stiglitz Reconsidered.
Discussion Papers. The Quarterly Journal of Economics 117 (2002), 1529-1570. Retrieved from
/files/dp242.pdf Publisher's Version
Abstract: In our previous paper we built a general equilibrium model of default and punishment in which equilibrium always exists and endogenously determines asset promises, penalties, and sales constraints. In this paper we interpret the endogenous sales constraints as equilibrium signals. By specializing the default penalties and imposing an exclusivity constraint on asset sales, we obtain a perfectly competitive version of the Rothschild-Stiglitz model of insurance. In our model their separating equilibrium always exists, even when they say it doesn't.
Neyman, A. (2001).
Singular Games in bv'NA.
Discussion Papers. Retrieved from
/files/dp262.pdf Publisher's Version
Abstract: Every simple monotonic game in bv'NA is a weighted majority game. Every game v in bv'NA has a representation v = u + sum_{i=1}^infty f_i o mu_i, where u is in pNA, mu_i is in NA^1, and (f_i) is a sequence of bv' functions with sum_{i=1}^infty ||f_i|| < infinity.
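Rendered in display form, the representation stated in the abstract reads (the finiteness condition on the norms is the standard reading of the truncated text):

```latex
v \;=\; u + \sum_{i=1}^{\infty} f_i \circ \mu_i ,
\qquad u \in pNA, \quad \mu_i \in NA^1, \quad \sum_{i=1}^{\infty} \lVert f_i \rVert < \infty .
```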
Kalai, G. (2001).
Social Choice and Threshold Phenomena.
Discussion Papers. Retrieved from
/files/dp279.pdf Publisher's Version
Abstract: Arrow's theorem asserts that under certain conditions every non-dictatorial social choice function leads to nonrational social choice for some profiles. In other words, in the case of a non-dictatorial social choice function, if we observe that the society prefers alternative A over B and alternative B over C, we cannot deduce what its choice will be between A and C. Here we ask whether we can deduce anything about the society's choice in other cases from observing a sample of the society's choices. We prove that the answer is "no" for large societies, for neutral and monotonic social choice functions such that the society's choice is not typically determined by the choices of a few individuals. The proof is based on threshold properties of Boolean functions and on analysis of the social choice under some probabilistic assumptions on the profiles. A similar argument shows that, under the same conditions on the social choice function but under certain other probabilistic assumptions on the profiles, the social choice function will typically lead to rational choice for the society.
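The probabilistic flavor of the argument can be sketched with a small Monte Carlo experiment (an illustration under the standard "impartial culture" assumption of uniformly random rankings, not the paper's construction): with three alternatives and majority rule, a small but persistent fraction of profiles yields a cyclic, i.e. nonrational, social preference.

```python
import itertools
import random

def cycle_frequency(n_voters=101, trials=2000, seed=1):
    """Estimate how often pairwise majority voting over three
    alternatives produces an intransitive (cyclic) social preference
    when each voter's ranking is drawn uniformly at random."""
    rng = random.Random(seed)
    rankings = list(itertools.permutations("ABC"))
    cycles = 0
    for _ in range(trials):
        profile = [rng.choice(rankings) for _ in range(n_voters)]
        def majority_prefers(x, y):
            # x beats y if a strict majority ranks x above y
            # (n_voters is odd, so there are no pairwise ties).
            return sum(r.index(x) < r.index(y) for r in profile) * 2 > n_voters
        ab = majority_prefers("A", "B")
        bc = majority_prefers("B", "C")
        ca = majority_prefers("C", "A")
        # A cycle is A>B>C>A or its reverse C>B>A>C.
        if (ab and bc and ca) or (not ab and not bc and not ca):
            cycles += 1
    return cycles / trials

freq = cycle_frequency()
print(f"fraction of profiles with a Condorcet cycle: {freq:.3f}")
```

Under different probabilistic assumptions on the profiles this frequency changes, which is the kind of sensitivity the abstract's two results exploit.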
Milchtaich, I., & Winter, E. (2001).
Stability and Segregation in Group Formation.
Discussion Papers. Games and Economic Behavior 38 (2002), 318-346. Retrieved from
/files/dp263.pdf Publisher's Version
Abstract: This paper presents a model of group formation based on the assumption that individuals prefer to associate with people similar to them. It is shown that, in general, if the number of groups that can be formed is bounded, then a stable partition of the society into groups may not exist. A partition is defined as stable if none of the individuals would prefer to be in a different group than the one he is in. However, if individuals' characteristics are one-dimensional, then a stable partition always exists. We give sufficient conditions for stable partitions to be segregating (in the sense that, for example, low-characteristic individuals are in one group and high-characteristic ones are in another) and Pareto efficient. In addition, we propose a dynamic model of individual myopic behavior describing the evolution of group formation to an eventual stable, segregating, and Pareto efficient partition.
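A stylized sketch of such myopic dynamics (a nearest-group-mean rule as a proxy for "preferring to associate with similar people"; this is an illustration, not the paper's exact model): with one-dimensional characteristics and two groups, repeated myopic moves settle into a segregating partition, i.e. each group occupies an interval of the characteristic line.

```python
import random

def myopic_grouping(characteristics, n_groups=2, seed=2, max_iter=100):
    """Each individual repeatedly moves to the group whose mean
    characteristic is closest to his own, until no one wants to move."""
    rng = random.Random(seed)
    group = [rng.randrange(n_groups) for _ in characteristics]
    for _ in range(max_iter):
        means = []
        for g in range(n_groups):
            members = [x for x, gi in zip(characteristics, group) if gi == g]
            # An empty group attracts nobody.
            means.append(sum(members) / len(members) if members else float("inf"))
        new_group = [min(range(n_groups), key=lambda g: abs(x - means[g]))
                     for x in characteristics]
        if new_group == group:  # stable: nobody prefers to move
            break
        group = new_group
    return group

xs = sorted(random.Random(0).uniform(0, 1) for _ in range(20))
groups = myopic_grouping(xs)
print(groups)
```

Because assignment to the nearer of two group means in one dimension is a threshold rule, the resulting partition over the sorted characteristics is always contiguous, mirroring the segregation result in the abstract.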
Keiding, H., & Peleg, B. (2001).
Stable Voting Procedures for Committees in Economic Environments.
Discussion Papers. Journal of Mathematical Economics 30 (2001), 117-140. Retrieved from
/files/dp246.pdf Publisher's Version
Abstract: A strong representation of a committee, formalized as a simple game, on a convex and closed set of alternatives is a game form with the members of the committee as players such that (i) the winning coalitions of the simple game are exactly those coalitions which can get any given alternative independently of the strategies of the complement, and (ii) for any profile of continuous and convex preferences, the resulting game has a strong Nash equilibrium. In the paper, it is investigated whether committees have representations on convex and compact subsets of R^m. This is shown to be the case if there are vetoers; for committees with no vetoers the existence of strong representations depends on the structure of the alternative set as well as on that of the committee (its Nakamura number). Thus, if A is strictly convex, compact and has a smooth boundary, then no committee can have a strong representation on A. On the other hand, if A has non-smooth boundary, representations may exist, depending on the Nakamura number (if it is at least 7).
Mutuswami, S., & Winter, E. (2001).
Subscription Mechanisms for Network Formation.
Discussion Papers. Journal of Economic Theory 106 (2002), 242-264. Retrieved from
/files/Eyal264.pdf Publisher's Version
Abstract: We analyze a model of network formation where the costs of link formation are publicly known but individual benefits are not known to the social planner. The objective is to design a simple mechanism ensuring efficiency, budget balance and equity. We propose two mechanisms towards this end; the first ensures efficiency and budget balance but not equity. The second mechanism corrects the asymmetry in payoffs through a two-stage variant of the first mechanism. We also discuss an extension of the basic model to cover the case of directed graphs and give conditions under which the proposed mechanisms are immune to coalitional deviations.
Procaccia, U., & Segal, U. (2001).
Super Majoritarianism and the Endowment Effect.
Discussion Papers. Theory and Decision 55 (2003), 181-207. Retrieved from
/files/dp277.pdf Publisher's Version
Abstract: The American and some other constitutions entrench property rights by requiring super majoritarian voting as a condition for amending or revoking their own provisions. Following Buchanan and Tullock [5], this paper analyzes individuals' interests behind a veil of ignorance, and shows that under some standard assumptions, a (simple) majoritarian rule should be adopted. This result changes if one assumes that preferences are consistent with the behavioral phenomenon known as the "endowment effect." It then follows that (at least some) property rights are best defended by super majoritarian protection. The paper then shows that its theoretical results are consistent with a number of doctrines underlying American Constitutional Law.
Dagan, N., Volij, O., & Winter, E. (2001).
The Time-Preference Nash Solution.
Discussion Papers. Retrieved from
/files/dp265.pdf Publisher's Version
Abstract: The primitives of a bargaining problem consist of a set, S, of feasible utility pairs and a disagreement point in it. The idea is that the set S is induced by an underlying set of physical outcomes which, for the purposes of the analysis, can be abstracted away. In a very influential paper, Nash (1950) gives an axiomatic characterization of what is now widely known as the Nash bargaining solution. Rubinstein, Safra, and Thomson (1992) (RST in the sequel) recast the bargaining problem into the underlying set of physical alternatives and give an axiomatization of what is known as the ordinal Nash bargaining solution. This solution has a very natural interpretation and has the interesting property that when risk preferences satisfy the expected utility axioms, it induces the standard Nash bargaining solution of the induced bargaining problem. This property justifies the proper name in the solution's appellation. The purpose of this paper is to give an axiomatic characterization of the rule that assigns the time-preference Nash outcome to each bargaining problem.
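The standard Nash bargaining solution mentioned above picks the feasible point maximizing the product of utility gains over the disagreement point. A minimal numerical sketch (the linear frontier and the grid search are illustrative choices, not from the paper):

```python
def nash_solution(frontier, disagreement=(0.0, 0.0), steps=10_000):
    """Grid-search the Nash bargaining solution: the point on the
    Pareto frontier maximizing (u1 - d1) * (u2 - d2).  `frontier`
    maps t in [0, 1] to a utility pair on the frontier."""
    d1, d2 = disagreement
    best, best_val = None, float("-inf")
    for i in range(steps + 1):
        u1, u2 = frontier(i / steps)
        val = (u1 - d1) * (u2 - d2)
        if val > best_val:
            best, best_val = (u1, u2), val
    return best

# Splitting one unit of utility: u1 + u2 = 1, disagreement at (0, 0).
split = nash_solution(lambda t: (t, 1.0 - t))
print(split)  # the symmetric split (0.5, 0.5)
```

On a symmetric frontier the product of gains is maximized at the equal split, which is the familiar symmetry property of the Nash solution.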
Ullmann-Margalit, E. (2001).
Trust, Distrust, and in Between.
Discussion Papers. In Russell Hardin (ed.), Distrust, New York: Russell Sage Publications, 2004, 60-82. Retrieved from
/files/dp269.pdf Publisher's Version
Abstract: The springboard for this paper is the nature of the negation relation between the notions of trust and distrust. In order to explore this relation, an analysis of full trust is offered. An investigation follows of the ways in which this "end-concept" of full trust can be negated. In particular, the sense in which distrust is the negation of trust is focused on. An asymmetry is pointed to, between 'not-to-trust' and 'not-to-distrust'. This asymmetry helps explain the existence of a gap between trust and distrust: the possibility of being suspended between the two. Since both trust and distrust require reasons, the question this gap raises is what happens if there are no reasons, or at any rate no sufficient reasons, either way. This kind of situation, of being suspended between two poles without a sufficient reason to opt for either of them, paradigmatically calls for a presumption. In the case at hand this means a call for either a rebuttable presumption in favor of trust or a rebuttable presumption in favor of distrust. In some of the literature on trust it seems to be taken almost for granted that generalized distrust is justifiable in a way that generalized trust is not. This would seem to suggest a straightforward recommendation for the presumption of distrust over the presumption of trust. Doubts are raised whether it is indeed justified to adopt this as a default presumption. The notion of soft distrust, introduced at this point and contrasted with hard distrust, contributes in a significant way to these doubts. The analysis offered throughout the paper is of individual and personal trust and distrust. As it stands, it would seem not to be directly applicable to the case of trusting or distrusting institutions (like the court or the police).
The question is therefore raised, in the final section, whether and how the analysis of individual trust and distrust can be extended to institutional trust and distrust. A case is made that there is asymmetry here too: while it is a misnomer to talk of trusting institutions, talk of distrusting institutions is not.
Dubey, P., & Haimanko, O. (2001).
Unilateral Deviations with Perfect Information.
Discussion Papers. Retrieved from
/files/dp249.pdf Publisher's Version
Abstract: For extensive form games with perfect information, consider a learning process in which, at any iteration, each player unilaterally deviates to a best response to his current conjectures of others' strategies, and then updates his conjectures in accordance with the induced play of the game. We show that, for generic payoffs, the outcome of the game becomes stationary in finite time, and is consistent with Nash equilibrium. In general, if payoffs have ties or if players observe more of each other's strategies than is revealed by plays of the game, the same result holds provided a rationality constraint is imposed on unilateral deviations: no player changes his moves in subgames that he deems unreachable, unless he stands to improve his payoff there. Moreover, with this constraint, the sequence of strategies and conjectures also becomes stationary and yields a self-confirming equilibrium.
Hon-Snir, S. (2001).
Utility Equivalence in Auctions.
Discussion Papers. Retrieved from
/files/dp257.pdf Publisher's Version
Abstract: Auctions are considered with a (non-symmetric) independent-private-value model of valuations. It is demonstrated that a utility equivalence principle holds for an agent if and only if the agent has a constant absolute risk attitude.
Neyman, A. (2001).
Values of Games with Infinitely Many Players.
Discussion Papers. Handbook of Game Theory, with Economic Applications, Vol. III, R. J. Aumann and S. Hart (eds.), Elsevier/North-Holland (2002), 2121-2167. Retrieved from
/files/dp247.pdf Publisher's Version
Abstract: The Shapley value is one of the basic solution concepts of cooperative game theory. It can be viewed as a sort of average or expected outcome, or as an a priori evaluation of the players' expected payoffs. The value has a very wide range of applications, particularly in economics and political science (see chapters 32, 33 and 34 in this Handbook). In many of these applications it is necessary to consider games that involve a large number of players. Often most of the players are individually insignificant, and are effective in the game only via coalitions. At the same time there may exist big players who retain the power to wield single-handed influence. A typical example is provided by voting among stockholders of a corporation, with a few major stockholders and an "ocean" of minor stockholders. In economics, one considers an oligopolistic sector of firms embedded in a large population of "perfectly competitive" consumers. In all of these cases, it is fruitful to model the game as one with a continuum of players. In general, the continuum consists of a non-atomic part (the "ocean"), along with (at most countably many) atoms. The continuum provides a convenient framework for mathematical analysis, and approximates the results for large finite games well. Also, it enables a unified view of games with finite, countable or oceanic player-sets, or indeed any mixture of these.
Hart, S. (2001).
Values of Perfectly Competitive Economies.
Discussion Papers. In R. J. Aumann & S. Hart (eds.), Handbook of Game Theory, with Economic Applications, Vol. III, Ch. 57, Elsevier/North-Holland (2002). Retrieved from
/files/val-hgt.html Publisher's Version
Abstract: This chapter is devoted to the study of economic models with many agents, each of whom is relatively insignificant. These are referred to as perfectly competitive models. The basic economic concept for such models is the competitive (or Walrasian) equilibrium, which prescribes prices that make the total demand equal to the total supply, i.e., under which the "markets clear." The fact that each agent is negligible implies that he cannot singly affect the prices, and so he takes them as given when finding his optimal consumption - "demand." The chapter is organized as follows: Section 2 presents the basic model of an exchange economy with a continuum of agents, together with the definitions of the appropriate concepts. The Value Principle results are stated in Section 3. An informal (and hopefully instructive) proof of the Value Equivalence Theorem is provided in Section 4. Section 5 is devoted to additional material, generalizations, extensions and alternative approaches.
Dubey, P., & Wu, C.-W. (2001).
When Less Competition Induces More Product Innovation.
Discussion Papers. Economics Letters 74 (2002), 309-312. Retrieved from
/files/dp255.pdf Publisher's Version
Abstract: Consider firms which engage in Cournot competition over a common product, but can undertake innovation to improve the quality of their product. In this scenario it can often happen that innovation is discouraged by too much or too little competition, and occurs only when the industry is of intermediate size.
Neyman, A., & Sorin, S. (2001).
Zero-Sum Two-Person Repeated Games with Public Uncertain Duration Process.
Discussion Papers. Retrieved from
/files/dp259.pdf Publisher's Version
Abstract: We consider repeated two-person zero-sum games where the number of repetitions theta is unknown. The information about the uncertain duration is identical to both players and can change during the play of the game. This is described by an uncertain duration process Theta. To each repeated game Gamma and uncertain duration process Theta is associated the Theta-repeated game Gamma_Theta with value V_Theta. We establish a recursive formula for the value V_Theta. We study asymptotic properties of the normalized value v_Theta = V_Theta/E(theta) as the expected duration E(theta) goes to infinity. We extend and unify several asymptotic results on the existence of lim v_n and lim v_lambda and their equality to lim v_Theta. This analysis applies in particular to stochastic games and repeated games of incomplete information.