2001
Neyman, A. (2001).
Singular Games in Bv'NA.
Discussion Papers. Retrieved from
/files/dp262.pdf
Abstract: Every simple monotonic game in bv'NA is a weighted majority game. Every game v in bv'NA has a representation $v = u + \sum_{i=1}^{\infty} f_i \circ \mu_i$, where $u \in pNA$, $\mu_i \in NA^1$, and $(f_i)$ is a sequence of bv' functions with $\sum_{i=1}^{\infty} \|f_i\| < \infty$.
Kalai, G. (2001).
Social Choice and Threshold Phenomena.
Discussion Papers. Retrieved from
/files/dp279.pdf
Abstract: Arrow's theorem asserts that under certain conditions every non-dictatorial social choice function leads to nonrational social choice for some profiles. In other words, in the non-dictatorial case, if we observe that the society prefers alternative A over B and alternative B over C, we cannot deduce what its choice will be between A and C. Here we ask whether we can deduce anything about the society's choice in other cases from observing a sample of the society's choices. We prove that the answer is "no" for large societies, for neutral and monotonic social choice functions such that the society's choice is not typically determined by the choices of a few individuals. The proof is based on threshold properties of Boolean functions and on an analysis of the social choice under some probabilistic assumptions on the profiles. A similar argument shows that, under the same conditions on the social choice function but under certain other probabilistic assumptions on the profiles, the social choice function will typically lead to rational choice for the society.
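The threshold behavior of Boolean functions invoked in the proof can be conveyed by a small Monte Carlo sketch (this illustrates the standard sharp threshold of the simple majority function, not the paper's construction; all names below are ours):

```python
import random

def majority_prefers_A(n, p, trials=2000, seed=0):
    """Estimate P(a simple majority of n voters prefers A) when each
    voter independently prefers A with probability p."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        votes_for_A = sum(rng.random() < p for _ in range(n))
        if votes_for_A > n / 2:
            wins += 1
    return wins / trials

# For a large society the majority outcome flips sharply around p = 1/2:
for p in (0.45, 0.49, 0.51, 0.55):
    print(p, majority_prefers_A(n=1001, p=p))
```

For n = 1001, the estimated probability is essentially 0 at p = 0.45 and essentially 1 at p = 0.55: a small shift in individual preferences around the threshold flips the collective outcome almost deterministically.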
Milchtaich, I., & Winter, E. (2001).
Stability and Segregation in Group Formation.
Discussion Papers. Games and Economic Behavior 38 (2002), 318-346. Retrieved from
/files/dp263.pdf
Abstract: This paper presents a model of group formation based on the assumption that individuals prefer to associate with people similar to them. It is shown that, in general, if the number of groups that can be formed is bounded, then a stable partition of the society into groups may not exist. A partition is defined as stable if none of the individuals would prefer to be in a different group than the one he is in. However, if individuals' characteristics are one-dimensional, then a stable partition always exists. We give sufficient conditions for stable partitions to be segregating (in the sense that, for example, low-characteristic individuals are in one group and high-characteristic ones are in another) and Pareto efficient. In addition, we propose a dynamic model of individual myopic behavior describing the evolution of group formation to an eventual stable, segregating, and Pareto efficient partition.
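A myopic adjustment dynamic of this flavor can be sketched for the one-dimensional case (an illustrative stand-in under assumptions of ours — individuals repeatedly join the group whose mean characteristic is closest to their own — not the paper's exact model):

```python
def myopic_group_dynamics(chars, groups, max_iters=100):
    """Toy myopic adjustment: each individual joins the group whose
    mean characteristic is closest to his own; repeat until no one
    moves (or max_iters is reached)."""
    groups = list(groups)
    for _ in range(max_iters):
        # Mean characteristic of each currently non-empty group.
        means = {}
        for g in set(groups):
            members = [c for c, h in zip(chars, groups) if h == g]
            means[g] = sum(members) / len(members)
        # Each individual myopically picks the closest group.
        new = [min(means, key=lambda g: abs(means[g] - c)) for c in chars]
        if new == groups:
            break
        groups = new
    return groups

# One-dimensional characteristics, starting from a mixed assignment:
chars = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
print(myopic_group_dynamics(chars, [0, 1, 0, 1, 0, 1]))
```

Starting from an alternating assignment, the dynamic settles in a couple of steps on a segregating partition: the three low-characteristic individuals end up together in one group and the three high-characteristic ones in the other.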
Keiding, H., & Peleg, B. (2001).
Stable Voting Procedures for Committees in Economic Environments.
Discussion Papers. Journal of Mathematical Economics 30 (2001), 117-140. Retrieved from
/files/dp246.pdf
Abstract: A strong representation of a committee, formalized as a simple game, on a convex and closed set of alternatives is a game form with the members of the committee as players such that (i) the winning coalitions of the simple game are exactly those coalitions which can get any given alternative independently of the strategies of the complement, and (ii) for any profile of continuous and convex preferences, the resulting game has a strong Nash equilibrium. In the paper, it is investigated whether committees have representations on convex and compact subsets of R^m. This is shown to be the case if there are vetoers; for committees with no vetoers the existence of strong representations depends on the structure of the alternative set as well as on that of the committee (its Nakamura number). Thus, if A is strictly convex, compact, and has smooth boundary, then no committee can have a strong representation on A. On the other hand, if A has non-smooth boundary, representations may exist, depending on the Nakamura number (if it is at least 7).
Mutuswami, S., & Winter, E. (2001).
Subscription Mechanisms for Network Formation.
Discussion Papers. Journal of Economic Theory 106 (2002), 242-264. Retrieved from
/files/Eyal264.pdf
Abstract: We analyze a model of network formation where the costs of link formation are publicly known but individual benefits are not known to the social planner. The objective is to design a simple mechanism ensuring efficiency, budget balance, and equity. We propose two mechanisms towards this end; the first ensures efficiency and budget balance but not equity. The second mechanism corrects the asymmetry in payoffs through a two-stage variant of the first mechanism. We also discuss an extension of the basic model to cover the case of directed graphs and give conditions under which the proposed mechanisms are immune to coalitional deviations.
Procaccia, U., & Segal, U. (2001).
Super Majoritarianism and the Endowment Effect.
Discussion Papers. Theory and Decision 55 (2003), 181-207. Retrieved from
/files/dp277.pdf
Abstract: The American and some other constitutions entrench property rights by requiring super majoritarian voting as a condition for amending or revoking their own provisions. Following Buchanan and Tullock [5], this paper analyzes individuals' interests behind a veil of ignorance, and shows that under some standard assumptions a (simple) majoritarian rule should be adopted. This result changes if one assumes that preferences are consistent with the behavioral phenomenon known as the "endowment effect." It then follows that (at least some) property rights are best defended by super majoritarian protection. The paper then shows that its theoretical results are consistent with a number of doctrines underlying American Constitutional Law.
Dagan, N., Volij, O., & Winter, E. (2001).
The Time-Preference Nash Solution.
Discussion Papers. Retrieved from
/files/dp265.pdf
Abstract: The primitives of a bargaining problem consist of a set, S, of feasible utility pairs and a disagreement point in it. The idea is that the set S is induced by an underlying set of physical outcomes which, for the purposes of the analysis, can be abstracted away. In a very influential paper, Nash (1950) gives an axiomatic characterization of what is now the widely known Nash bargaining solution. Rubinstein, Safra, and Thomson (1992) (RST in the sequel) recast the bargaining problem into the underlying set of physical alternatives and give an axiomatization of what is known as the ordinal Nash bargaining solution. This solution has a very natural interpretation and has the interesting property that when risk preferences satisfy the expected utility axioms, it induces the standard Nash bargaining solution of the induced bargaining problem. This property justifies the proper name in the solution's appellation. The purpose of this paper is to give an axiomatic characterization of the rule that assigns the time-preference Nash outcome to each bargaining problem.
Ullmann-Margalit, E. (2001).
Trust, Distrust, and in Between.
Discussion Papers. In Russell Hardin (ed.), Distrust, New York: Russell Sage Publications, 2004, 60-82. Retrieved from
/files/dp269.pdf
Abstract: The springboard for this paper is the nature of the negation relation between the notions of trust and distrust. In order to explore this relation, an analysis of full trust is offered. An investigation follows of the ways in which this "end-concept" of full trust can be negated. In particular, the sense in which distrust is the negation of trust is focused on. An asymmetry is pointed to, between 'not-to-trust' and 'not-to-distrust'. This asymmetry helps explain the existence of a gap between trust and distrust: the possibility of being suspended between the two. Since both trust and distrust require reasons, the question that relates to this gap is what happens if there are no reasons, or at any rate no sufficient reasons, either way. This kind of situation, of being suspended between two poles without a sufficient reason to opt for either of them, paradigmatically calls for a presumption. In the case at hand this means a call for either a rebuttable presumption in favor of trust or a rebuttable presumption in favor of distrust. In some of the literature on trust it seems to be taken almost for granted that generalized distrust is justifiable in a way that generalized trust is not. This would seem to suggest a straightforward recommendation for the presumption of distrust over the presumption of trust. Doubts are raised whether it is indeed justified to adopt this as a default presumption. The notion of soft distrust, which is introduced at this point as contrasted with hard distrust, contributes in a significant way to these doubts. The analysis offered throughout the paper is of individual and personal trust and distrust. As it stands, it would seem not to be directly applicable to the case of trusting or distrusting institutions (like the court or the police).
The question is therefore raised, in the final section, whether and how the analysis of individual trust and distrust can be extended to institutional trust and distrust. A case is made that there is asymmetry here too: while it is a misnomer to talk of trusting institutions, talk of distrusting institutions is not.
Dubey, P., & Haimanko, O. (2001).
Unilateral Deviations with Perfect Information.
Discussion Papers. Retrieved from
/files/dp249.pdf
Abstract: For extensive form games with perfect information, consider a learning process in which, at any iteration, each player unilaterally deviates to a best response to his current conjectures of others' strategies, and then updates his conjectures in accordance with the induced play of the game. We show that, for generic payoffs, the outcome of the game becomes stationary in finite time, and is consistent with Nash equilibrium. In general, if payoffs have ties or if players observe more of each other's strategies than is revealed by plays of the game, the same result holds provided a rationality constraint is imposed on unilateral deviations: no player changes his moves in subgames that he deems unreachable, unless he stands to improve his payoff there. Moreover, with this constraint, the sequence of strategies and conjectures also becomes stationary and yields a self-confirming equilibrium.
Hon-Snir, S. (2001).
Utility Equivalence in Auctions.
Discussion Papers. Retrieved from
/files/dp257.pdf
Abstract: Auctions are considered with a (non-symmetric) independent-private-value model of valuations. It is demonstrated that a utility equivalence principle holds for an agent if and only if that agent has a constant absolute risk attitude.
Neyman, A. (2001).
Values of Games with Infinitely Many Players.
Discussion Papers. Handbook of Game Theory, with Economic Applications, Vol. III, R. J. Aumann and S. Hart (eds.), Elsevier/North-Holland (2002), 2121-2167. Retrieved from
/files/dp247.pdf
Abstract: The Shapley value is one of the basic solution concepts of cooperative game theory. It can be viewed as a sort of average or expected outcome, or as an a priori evaluation of the players' expected payoffs. The value has a very wide range of applications, particularly in economics and political science (see chapters 32, 33 and 34 in this Handbook). In many of these applications it is necessary to consider games that involve a large number of players. Often most of the players are individually insignificant, and are effective in the game only via coalitions. At the same time there may exist big players who retain the power to wield single-handed influence. A typical example is provided by voting among stockholders of a corporation, with a few major stockholders and an "ocean" of minor stockholders. In economics, one considers an oligopolistic sector of firms embedded in a large population of "perfectly competitive" consumers. In all of these cases, it is fruitful to model the game as one with a continuum of players. In general, the continuum consists of a non-atomic part (the "ocean"), along with (at most countably many) atoms. The continuum provides a convenient framework for mathematical analysis, and approximates the results for large finite games well. Also, it enables a unified view of games with finite, countable or oceanic player-sets, or indeed any mixture of these.
Hart, S. (2001).
Values of Perfectly Competitive Economies.
Discussion Papers. In R. J. Aumann & S. Hart (eds.), Handbook of Game Theory, with Economic Applications (2002), Vol. III, Ch. 57, Elsevier/North-Holland. Retrieved from
/files/val-hgt.html
Abstract: This chapter is devoted to the study of economic models with many agents, each of whom is relatively insignificant. These are referred to as perfectly competitive models. The basic economic concept for such models is the competitive (or Walrasian) equilibrium, which prescribes prices that make the total demand equal to the total supply, i.e., under which the "markets clear." The fact that each agent is negligible implies that he cannot singly affect the prices, and so he takes them as given when finding his optimal consumption - "demand." The chapter is organized as follows: Section 2 presents the basic model of an exchange economy with a continuum of agents, together with the definitions of the appropriate concepts. The Value Principle results are stated in Section 3. An informal (and hopefully instructive) proof of the Value Equivalence Theorem is provided in Section 4. Section 5 is devoted to additional material, generalizations, extensions and alternative approaches.
Dubey, P., & Wu, C.-W. (2001).
When Less Competition Induces More Product Innovation.
Discussion Papers. Economics Letters 74 (2002), 309-312. Retrieved from
/files/dp255.pdf
Abstract: Consider firms which engage in Cournot competition over a common product, but can undertake innovation to improve the quality of their product. In this scenario it can often happen that innovation is discouraged by too much or too little competition, and occurs only when the industry is of intermediate size.
Neyman, A., & Sorin, S. (2001).
Zero-Sum Two-Person Repeated Games with Public Uncertain Duration Process.
Discussion Papers. Retrieved from
/files/dp259.pdf
Abstract: We consider repeated two-person zero-sum games where the number of repetitions $\theta$ is unknown. The information about the uncertain duration is identical to both players and can change during the play of the game. This is described by an uncertain duration process $\Theta$. To each repeated game $\Gamma$ and uncertain duration process $\Theta$ is associated the $\Theta$-repeated game $\Gamma_\Theta$ with value $V_\Theta$. We establish a recursive formula for the value $V_\Theta$. We study asymptotic properties of the value $v_\Theta = V_\Theta / E(\theta)$ as the expected duration $E(\theta)$ goes to infinity. We extend and unify several asymptotic results on the existence of $\lim v_n$ and $\lim v_\lambda$ and their equality to $\lim v_\Theta$. This analysis applies in particular to stochastic games and repeated games of incomplete information.
2000
Cohen, D. (2000).
A Rational Basis for Irrational Beliefs and Behaviors.
Discussion Papers.
No abstract.
Hart, S., & Mas-Colell, A. (2000).
A Reinforcement Procedure Leading to Correlated Equilibrium.
Discussion Papers. In G. Debreu, W. Neuefeind & W. Trockel (eds.), Economic Essays: A Festschrift for Werner Hildenbrand, Springer (2001), 181-200.
Abstract: We consider repeated games where at any period each player knows only his set of actions and the stream of payoffs that he has received in the past. He knows neither his own payoff function, nor the characteristics of the other players (how many there are, their strategies and payoffs). In this context, we present an adaptive procedure for play - called "modified regret matching" - which is interpretable as a stimulus-response or reinforcement procedure, and which has the property that any limit point of the empirical distribution of play is a correlated equilibrium of the stage game.
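The flavor of such adaptive procedures can be conveyed by a sketch of plain regret matching (the earlier, unmodified procedure, which assumes the player can evaluate foregone payoffs; the modified variant in the paper drops that assumption). The opponent here is a stand-in that plays uniformly at random, and all names are ours:

```python
import random

def regret_matching(payoff, n_actions, rounds=5000, seed=1):
    """Unmodified regret matching for one player against an i.i.d.
    uniform opponent, for illustration only. payoff(a, b) is this
    player's payoff when he plays a and the opponent plays b."""
    rng = random.Random(seed)
    regret = [0.0] * n_actions   # cumulative regret per action
    counts = [0] * n_actions     # empirical play counts
    for _ in range(rounds):
        # Play each action with probability proportional to its
        # positive cumulative regret (uniform if none is positive).
        positive = [max(r, 0.0) for r in regret]
        total = sum(positive)
        if total > 0:
            x = rng.random() * total
            a = 0
            while x > positive[a]:
                x -= positive[a]
                a += 1
        else:
            a = rng.randrange(n_actions)
        b = rng.randrange(n_actions)  # stand-in opponent
        u = payoff(a, b)
        # Update regrets against the realized opponent action.
        for k in range(n_actions):
            regret[k] += payoff(k, b) - u
        counts[a] += 1
    return [c / rounds for c in counts]

# Rock-paper-scissors: empirical play stays roughly balanced.
rps = lambda a, b: [[0, -1, 1], [1, 0, -1], [-1, 1, 0]][a][b]
print(regret_matching(rps, 3))
```

The positive-regret sampling step is the heart of the procedure; the paper's modification replaces the foregone-payoff term `payoff(k, b) - u` with an estimate built from the player's own realized payoffs only.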
Yaniv, I., & Schul, Y. (2000).
Acceptance and Elimination Procedures in Choice: Non-Complementarity and the Role of Implied Status Quo.
Discussion Papers. Organizational Behavior and Human Decision Processes 82 (2000), 293-313. Retrieved from
/files/dp211.pdf
Abstract: The present research contrasts two seemingly complementary decision strategies: acceptance and elimination. In acceptance, a choice set is created by including suitable alternatives from an initial set of alternatives, whereas in elimination it is created by removing inappropriate alternatives from that same initial set. The research used realistic career decision-making scenarios and presented to respondents sets of alternatives that varied in their pre-experimental strength values. Whereas complementarity of acceptance and elimination is implied by three standard (normative) assumptions of decision theory, we find a systematic discrepancy between the outcomes of these procedures: choice sets were larger in elimination than in acceptance. This acceptance/elimination discrepancy is directly tied to sub-complementarity. The central tenet of the theoretical framework developed here is that acceptance and elimination procedures imply different types of status quo for the alternatives, thereby invoking a different selection criterion for each procedure. A central prediction of the dual-criterion framework is that "middling" alternatives should be most susceptible to the type of procedure used. The present studies focus on this prediction, which is substantiated by the results showing that "middling" alternatives yield the greatest discrepancy between acceptance and elimination. The implications of this model and findings for various research domains are discussed.
Yaniv, I., & Kleinberger, E. (2000).
Advice Taking in Decision Making: Egocentric Discounting and Reputation Formation.
Discussion Papers. Organizational Behavior and Human Decision Processes 83 (2000), 260-281. Retrieved from
/files/dp212.pdf
Abstract: Our framework for understanding advice-taking in decision making rests on two theoretical concepts that motivate the studies and serve to explain the findings. The first is egocentric discounting of others' opinions and the second is reputation formation for advisors. We review the evidence for these concepts, trace their theoretical origins, and point out some of their implications. In three studies we measured decision makers' "weighting policy" for the advice, and in a fourth study, their "willingness to pay" for it. Briefly, we found that advice is discounted relative to one's own opinion, and reputation for advisors is rapidly formed and asymmetrically revised. The asymmetry implies that it may be easier for advisors to lose a good reputation than to gain it. The cognitive and social origins of these phenomena are considered.
N., R., & Gorfine, M. (2000).
Analysing Data of Intergroup Prisoner's Dilemma Game.
Discussion Papers. Retrieved from
/files/dp215.ps
Abstract: The Intergroup Prisoner's Dilemma (IPD) game was suggested by Bornstein (1992) for modeling intergroup conflicts over continuous public goods. We analyze data of an experiment in which the IPD game was played for 150 rounds, under three matching conditions. The objective is to study differences in the investment patterns of players in the different groups. A repeated-measures analysis (Goren & Bornstein, 1999) involved data aggregation and strong distributional assumptions. Here we introduce a non-parametric approach based on permutation tests, applied to the raw data. Two new measures, the cumulative investment and the normalized cumulative investment, provide additional insight into the differences between groups. The proposed tests, based on the area under the investment curves, identify overall and pairwise differences between groups. A simultaneous confidence band for the mean difference curve is used to detect games which account for pairwise differences.
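A minimal version of such a test can be sketched as follows (a generic permutation test on the area under investment curves, with toy data of ours; not the paper's exact statistics or measures):

```python
import random

def auc(curve):
    """Area under an investment curve sampled at unit-spaced rounds
    (simple trapezoidal rule)."""
    return sum((a + b) / 2 for a, b in zip(curve, curve[1:]))

def permutation_test(group1, group2, n_perm=5000, seed=0):
    """Two-sided permutation test on the difference in mean AUC
    between two groups of players' investment curves."""
    rng = random.Random(seed)
    aucs = [auc(c) for c in group1] + [auc(c) for c in group2]
    n1 = len(group1)
    observed = abs(sum(aucs[:n1]) / n1 - sum(aucs[n1:]) / (len(aucs) - n1))
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(aucs)  # relabel players at random
        diff = abs(sum(aucs[:n1]) / n1 - sum(aucs[n1:]) / (len(aucs) - n1))
        if diff >= observed:
            extreme += 1
    return (extreme + 1) / (n_perm + 1)  # add-one to avoid p = 0

# Toy data: group B invests systematically more than group A.
ga = [[0.2 + 0.01 * t for t in range(10)] for _ in range(8)]
gb = [[0.6 + 0.01 * t for t in range(10)] for _ in range(8)]
print(permutation_test(ga, gb))
```

Because only player labels are permuted, the test makes no distributional assumptions about the raw investment data, which is the point of the non-parametric approach described above.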
Shapira, Z. (2000).
Aspiration Levels and Risk Taking by Government Bond Traders.
Discussion Papers. Retrieved from
/files/zur227.pdf
Abstract: The management of risk is important in financial institutions. In particular, investment houses dealing with volatile financial markets such as foreign exchange or government bonds may find it difficult to maintain "proper" levels of risk taking. On the one hand, firms encourage traders to take risks in trading government bonds, but on the other, they promote risk aversion, since they value a reputation as careful and solid investors rather than a reputation as risk takers. Government bond traders work in a very volatile and fast-moving market. They are compensated by a base salary plus a bonus which relates to the profit and loss (P&L) they create for the firm on the securities they trade. Recent models of risk taking (Kahneman and Tversky, 1979; March and Shapira, 1992; Shapira, 1995) suggest that risk taking is affected by the targets or reference points that people use to evaluate risky prospects. Such targets can be set on "objective" grounds, that is, based on some rational economic considerations of profitability. However, often the targets are set in a "comparative" sense, that is, by comparison to the performance of other similar firms. The above models suggest some alternative ways in which targets may affect risk taking. These predictions are tested using data on actual purchase and sell decisions made by government bond traders. Implications for risk management are discussed.