Publications

2001
Jean-Francois Mertens and Abraham Neyman. A Value On 'AN. Discussion Papers 2001.
We prove here the existence of a value (of norm 1) on the spaces 'NA and even 'AN, the closure in the variation distance of the linear space spanned by all games f∘mu, where mu is a non-atomic, non-negative finitely additive measure of mass 1 and f is a real-valued function on [0, 1] which satisfies a much weakened form of continuity at zero and one.
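For readers unfamiliar with the notation, a game of the form f∘mu assigns to each coalition a worth that depends only on the coalition's measure; as a brief reminder of the standard definition in this literature (the precise conditions on f are those stated above):

\[ (f \circ \mu)(S) \;=\; f(\mu(S)) \qquad \text{for every coalition } S. \]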
Alon Harel and Assaf Jacob. An Economic Rationale For The Legal Treatment Of Omissions In Tort Law. Discussion Papers 2001.
This paper provides an economic justification for the exemption from liability for omissions and for the exceptions to this exemption. It interprets the differential treatment of acts and omissions in tort law as a proxy for a more fundamental distinction between harms caused by multiple injurers, each of whom can single-handedly prevent the harm (either by acting or by failing to act), and harms caused by a single injurer (either by acting or by failing to act). Since the overall cost to which a group of injurers is exposed is constant, attributing liability to many injurers reduces the part each has to pay and consequently reduces each injurer's incentive to take precautions. The broad exemption from liability for omissions is a way of carving out a simple, practical rule to distinguish between the typical cases in which an agent can be easily selected and provided with sufficient incentives (typically, cases of acts) and cases in which there is a serious problem of dilution of liability (typically, cases of omissions). The exceptions to the rule exempting from responsibility for omissions are also explained in terms of efficiency. The imposition of liability for omissions depends on the ability to identify a salient agent, i.e., to single out one or a few legally responsible agents and differentiate their role from that of others. Tort law designs three types of "salience rules": it either creates salience directly (by attributing liability to a single agent), exploits salience created "naturally", or induces injurers to create salience voluntarily.
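As a stylized illustration of the dilution-of-liability point (the notation here is ours, for concreteness only): if an expected harm H is shared equally among n potential injurers, each expects to bear only H/n, so an individual takes a precaution costing c only when c < H/n; as n grows, this condition fails even for precautions that are socially worthwhile (c < H).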
Bezalel Peleg and Eyal Winter. Constitutional Implementation. Discussion Papers 2001.
We consider the problem of implementing a social choice correspondence H in Nash equilibrium when the constitution of the society is given by an effectivity function E. It is assumed that the effectivity function of H, E^H, is a sub-correspondence of E. We find necessary and sufficient conditions for a game form Gamma to implement H (in Nash equilibria) and to satisfy, at the same time, that E^Gamma, the effectivity function of Gamma, is a sub-correspondence of E^H (which guarantees that Gamma is compatible with E). We also find sufficient conditions for the coincidence of the set of winning coalitions of E^Gamma and E^H, and for E^Gamma = E^H. All our results are sharp, as is shown by suitable examples.
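For concreteness, the effectivity function E^Gamma of a game form Gamma with outcome function g and set of alternatives A is, in this literature, usually the alpha-effectivity function; a sketch of that standard definition (the paper's exact formulation may differ in detail):

\[ E^{\Gamma}(S) \;=\; \bigl\{\, B \subseteq A \;:\; \exists\, \sigma_S \ \forall\, \sigma_{N \setminus S},\ \ g(\sigma_S, \sigma_{N \setminus S}) \in B \,\bigr\}, \]

that is, a coalition S is effective for a set B of alternatives if it has a joint strategy guaranteeing that the outcome lies in B, whatever the complementary coalition does.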
Pradeep Dubey, John Geanakoplos, and Martin Shubik. Default And Punishment In General Equilibrium. Discussion Papers 2001.
We extend the standard model of general equilibrium with incomplete markets to allow for default and punishment. The equilibrating variables include expected delivery rates, along with the usual prices of assets and commodities. By reinterpreting the variables, our model encompasses a broad range of moral hazard, adverse-selection, and signalling phenomena (including the Akerlof lemons model and Rothschild-Stiglitz insurance model) in a general equilibrium framework. We impose a condition on the expected delivery rates for untraded assets that is similar to the trembling hand refinements used in game theory. Despite earlier claims about the nonexistence of equilibrium with adverse selection, we show that equilibrium always exists, even with exclusivity constraints on asset sales, and transactions-liquidity costs or information-evaluation costs for asset trade. We show that more lenient punishment which encourages default may be Pareto improving because it allows for better risk spreading. We also show that default opens the door to a theory of endogenous assets.
Bezalel Peleg and Peter Sudholter. The Dummy Paradox Of The Bargaining Set. Discussion Papers 2001.
By means of an example of a superadditive 0-normalized game, we show that the maximum payoff to a dummy in the bargaining set may decrease when the marginal contribution of the dummy to the grand coalition becomes positive.
Harel Goren. The Effect Of Out-Group Competition On Individual Behavior And Out-Group Perception In The Intergroup Prisoner's Dilemma (IPD) Game. Discussion Papers 2001.
Hebrew University of Jerusalem students participated in two experiments of repeated play of the Intergroup Prisoners' Dilemma (IPD) game, which involves conflict of interests between two groups and, simultaneously, within each group. The experiments manipulated the level of competition exhibited by the out-group members (i.e., their level of contribution to their group's effort in the conflict). Consistent with the hypothesis that participants use strategies of reciprocal cooperation between groups, higher levels of out-group competition caused participants to increase their contribution and lower levels caused them to decrease it. In addition, participants had accurate recall of the contribution levels of out-group members, and they attributed motivations to out-group members in a manner that reflected their level of contribution. The nature of reciprocation with the out-group is discussed in light of both behavioral and cognitive data.
Pradeep Dubey and Ori Haimanko. Envy And The Optimality Of Tournaments. Discussion Papers 2001.
We show that tournaments tend to outperform piece-rate contracts when there is sufficient envy among the agents.
Hidehiko Ichimura and Juergen Bracht. Estimation Of Learning Models On Experimental Game Data. Discussion Papers 2001.
The objective of this paper is both to examine the performance and to show properties of statistical techniques used to estimate learning models on experimental game data. We consider a game with a unique mixed strategy equilibrium. We discuss identification of a general learning model and its special cases, reinforcement and belief learning, and propose a parameterization of the model. We conduct Monte Carlo simulations to evaluate the finite sample performance of two kinds of estimators of a learning model's parameters: maximum likelihood estimators of period-to-period transitions and mean squared deviation estimators of the entire path of play. In addition, we investigate the performance of a log score estimator of the entire path of play and a mean squared deviation estimator of period-to-period transitions. Finally, we evaluate a mean squared deviation estimator of the entire path of play with observed actions averaged over blocks, instead of behavioral strategies. We propose to estimate the learning model by maximum likelihood estimation, as this method performs well at the sample sizes used in practice if enough cross-sectional variation is observed.
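As a small illustration of maximum likelihood estimation of period-to-period transitions, the sketch below simulates a one-parameter reinforcement-learning (Roth-Erev-style) player and recovers its forgetting parameter by a grid-search MLE. The payoff matrix, the opponent's behavior, the learning rule, and all names are illustrative assumptions, not the authors' specification.

```python
# Minimal sketch (assumptions throughout): fit a one-parameter reinforcement
# model by maximizing the period-to-period likelihood of observed choices.
import numpy as np

rng = np.random.default_rng(0)
PAYOFF = np.array([[9.0, 0.0], [0.0, 1.0]])       # row player's payoffs (assumed)

def simulate(phi, periods=200):
    """Simulate a row player's choices against a uniformly mixing opponent."""
    attractions = np.ones(2)                      # initial propensities (assumed)
    choices, opponents = [], []
    for _ in range(periods):
        p = attractions / attractions.sum()       # reinforcement choice rule
        a = rng.choice(2, p=p)
        b = rng.integers(2)                       # opponent mixes 50/50
        attractions *= (1.0 - phi)                # forgetting
        attractions[a] += PAYOFF[a, b]            # reinforce the realized payoff
        choices.append(a)
        opponents.append(b)
    return np.array(choices), np.array(opponents)

def neg_log_likelihood(phi, choices, opponents):
    """Period-to-period log-likelihood of the observed choices given phi."""
    attractions = np.ones(2)
    ll = 0.0
    for a, b in zip(choices, opponents):
        p = attractions / attractions.sum()
        ll += np.log(p[a])
        attractions *= (1.0 - phi)
        attractions[a] += PAYOFF[a, b]
    return -ll

choices, opponents = simulate(phi=0.10)
grid = np.linspace(0.0, 0.9, 91)
phi_hat = grid[np.argmin([neg_log_likelihood(f, choices, opponents) for f in grid])]
print(f"true phi = 0.10, grid MLE = {phi_hat:.2f}")
```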
Itzhak Venezia, Dan Galai, and Zur Shapira. Exclusive Vs. Independent Agents: A Separating Equilibrium Approach. Discussion Papers 2001.
We provide a separating equilibrium explanation for the existence of the independent insurance agent system despite the potentially higher costs of this system compared to those of the exclusive agents system (or direct underwriting). A model is developed assuming asymmetric information between insurers and insureds; the former do not know the riskiness of the latter. We also assume that the claims service provided by the independent agent system to its clients is superior to that offered by the direct underwriting system; that is, insureds using the independent agent system are more likely to receive reimbursement of their claims. Competition compels the insurers to provide within their own system the best contract to the insured. It is shown that in equilibrium the safer insureds choose direct underwriting, whereas the riskier ones choose independent agents. The predictions of the model agree with previous research demonstrating that the independent agent system is costlier than direct underwriting. The present model suggests that this does not result from inefficiency but rather from self-selection. The empirical implication of this analysis is that, ceteris paribus, the incidence of claims made by clients of the independent agent system is higher than that of clients of direct underwriting. Implications for the co-existence of different distribution systems due to unbundling of services in other industries, such as brokerage houses and the health care industry, are discussed.
Klaus Abbink, Ron Darziv, Zohar Gilula, Harel Goren, Bernd Irlenbusch, Arnon Keren, Bettina Rockenbach, Abdolkarim Sadrieh, Reinhard Selten, and Shmuel Zamir. The Fisherman's Problem: Exploring The Tension Between Cooperative And Non-Cooperative Concepts In A Simple Game. Discussion Papers 2001.
We introduce and experimentally study the Fisherman's Game, in which the application of economic theory leads to four different benchmarks. Non-cooperative sequential rationality predicts one extreme outcome while the core (which coincides with the competitive market equilibrium) predicts the other extreme. Intermediate, disjoint outcomes are predicted by fairness utility models and the Shapley value. None of the four benchmarks fully explains the observed behavior. However, since elements of both cooperative and non-cooperative game theory are crucial for organizing our data, we conclude that effort towards bridging the gap between the various concepts is a promising approach for future economic research.
Robert Samuel Simon. Games Of Incomplete Information, Ergodic Theory, And The Measurability Of Bayesian Equilibria. Discussion Papers 2001.
This paper discusses the difference between Harsanyi and Bayesian equilibria for games of incomplete information played on uncountable belief spaces. A conjecture belonging to ergodic theory is presented. If the conjecture were valid, then there would exist a game played on an uncountable belief space with a common prior for which there are Bayesian equilibria but no Harsanyi equilibrium.
Yigal Attali and Maya Bar-Hillel. Guess Where: The Position Of Correct Answers In Multiple-Choice Test Items As A Psychometric Variable. Discussion Papers 2001.
In this paper, we show that test makers and test takers have a strong and systematic tendency for hiding correct answers – or, respectively, for seeking them – in middle positions. In single, isolated questions, both prefer middle positions over extreme ones in a ratio of up to 3 or 4 to 1. Because test makers routinely, deliberately and excessively balance the answer key of operational tests, middle bias almost, though not quite, disappears in those keys. Examinees taking real tests also produce answer sequences that are more balanced than their single question tendencies, but to a lesser extent than the correct key. In a typical 4-choice test, about 55% of erroneous answers (which are the only answers whose position is determined by the test taker, not the test maker) are in the two central positions. We show that this bias is large enough to have real psychometric consequences, as questions with middle correct answers are easier and – what's more important – less discriminating than questions with extreme correct answers, a fact some of whose implications we explore.
Robert J. Aumann and Aviad Heifetz. Incomplete Information. Discussion Papers 2001.
In interactive contexts such as games and economies, it is important to take account not only of what the players believe about substantive matters (such as payoffs), but also of what they believe about the beliefs of other players. Two different but equivalent ways of dealing with this matter, the semantic and the syntactic, are set forth. Canonical and universal semantic systems are then defined and constructed, and the concepts of common knowledge and common priors formulated and characterized. The last two sections discuss relations with Bayesian games of incomplete information and their applications, and with interactive epistemology - the theory of multi-agent knowledge and belief as formulated in mathematical logic.
Gary Bornstein. The Intergroup Prisoner's Dilemma Game As A Model Of Intergroup Conflict. Discussion Papers 2001.
Intergroup conflicts are characterized by conflicts of interests within the competing groups as well. The intragroup conflict stems from a basic fact: while all group members are better off if they all cooperate in competing against the outgroup, each individual group member is better off defecting. The Intergroup Prisoner's Dilemma (IPD) game is proposed as a theoretical framework for combining the intragroup and intergroup levels of conflict. This framework is used to examine major issues concerning individual and group behavior in intergroup conflict. These include: the effect of real intergroup conflict on intragroup cooperation; the motivational basis of cooperation; the distinction between non-cooperative groups, unitary groups, and individuals; and alternative routes to conflict resolution.
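To make the two levels of conflict concrete, one illustrative parameterization (an assumption made here for exposition; the paper's exact payoffs may differ) is

\[ u_i^{A} \;=\; e\,(1 - x_i) \;+\; r\Big(\sum_{j \in A} x_j \;-\; \sum_{k \in B} x_k\Big), \qquad x_i \in \{0,1\}, \quad r < e < n r, \]

where x_i = 1 means that member i of the n-person group A contributes the endowment e to the group effort. With r < e, defection is dominant for each individual; with e < nr, every member of A is better off when all n members contribute than when none do; and the groups' interests are directly opposed through the difference term.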
Gil Kalai. Learnability And Rationality Of Choice. Discussion Papers 2001.
The purpose of this paper is to examine the extent to which the concepts of individual and collective choice used in economic theory describe "predictable" or "learnable" behavior. Given a set X of N alternatives, a choice function c is a mapping which assigns to each nonempty subset S of X an element c(S) of S. A rational choice function is one for which there is a linear ordering on the alternatives such that c(S) is the maximal element of S according to that ordering. Using the basic concept of PAC-learnability from statistical learning theory, we define a class of choice functions on a ground set of N elements as learnable if it is possible to predict, with a small amount of error, the chosen element from a set A after viewing a "few examples." Here, "few" means a number polynomial in N. Learnability is quite a strict condition on a class of choice functions. The main points we discuss in this regard are: the class of rational choice functions can be learned quickly and efficiently; various natural classes of choice functions, which represent individual choices and strategic choices of several interacting agents, are learnable; and the class of rational choice functions has superior learnability properties in comparison to other classes. We make the conjecture that classes of choice functions that represent a genuine aggregation of individual choices in a large society are never learnable. We also ask to what extent learnability can replace or reinforce the rationality hypothesis in some economic situations.
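A minimal sketch of the prediction task described above (the ground-set size, the number of training examples, and the tie-breaking rule are illustrative choices made here, not the paper's):

```python
# Learn a rational choice function on N alternatives from a few observed choices:
# each observed choice c(S) reveals that c(S) beats every other element of S.
import random

random.seed(1)
N = 20
alternatives = list(range(N))
order = alternatives[:]
random.shuffle(order)                              # hidden linear ordering
rank = {x: i for i, x in enumerate(order)}         # lower rank = more preferred

def choose(S):
    """The rational choice function induced by the hidden ordering."""
    return min(S, key=rank.get)

def random_subset():
    return random.sample(alternatives, random.randint(2, N))

beats = set()
for _ in range(60):                                # a "few" training examples
    S = random_subset()
    best = choose(S)
    beats.update((best, x) for x in S if x != best)

def predict(S):
    """Predict any element of S not known to be beaten by another member of S."""
    undominated = [x for x in S if all((y, x) not in beats for y in S)]
    return random.choice(undominated)

tests = [random_subset() for _ in range(1000)]
errors = sum(predict(S) != choose(S) for S in tests)
print(f"prediction error after 60 examples: {errors / len(tests):.1%}")
```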
Ruma Falk and Ester Samuel-Cahn. Lewis Carroll's Obtuse Problem. Discussion Papers 2001.
Carroll's apparently impeccable solution to one of his probability problems is shown to answer another problem that is based on reasonable assumptions. His original assumptions, however, are self-contradictory, hence entailing paradoxical results.
Robert Samuel Simon. Locally Finite Knowledge Structures. Discussion Papers 2001.
With respect to the S5 multi-agent epistemic logic, we define a cell to be a minimal subset of knowledge structures known in common semantically by all the agents. A cell has finite fanout if at every knowledge structure every agent considers only a finite number of other knowledge structures to be possible. A set of formulas in common knowledge is finitely generated if the common knowledge of some finite subset implies the common knowledge of the whole set. For every finitely generated set of formulas held in common knowledge at some knowledge structure, either this set determines uniquely a finite cell or there are uncountably many cells of finite fanout (and also uncountably many cells of uncountable size) at which exactly this set of formulas is known in common. The situation is very different, however, for sets of formulas held in common knowledge that are not finitely generated: if there are uncountably many corresponding cells, then either none of these cells or all of them could have finite fanout.
Zvika Neeman and Nir Vulkan. Markets Versus Negotiations: The Emergence Of Centralized Markets. Discussion Papers 2001.
We study the incentives of privately informed traders who have access to two forms of trade: direct negotiations with a small number of buyers and sellers (or decentralized trade), and centralized markets with a relatively large number of buyers and sellers. We show that "weak" trader types (that is, buyers with a high willingness to pay and sellers with low costs) will prefer to trade through centralized markets. This leads to a complete unraveling of direct negotiations, so that ultimately, all "serious" buyers and sellers opt for trading through the centralized market. Once this happens, no trader can profitably trade through direct negotiations.
Bezalel Peleg, Hans Peters, and Ton Storcken. Nash Consistent Representation Of Constitutions: A Reaction To The Gibbard Paradox. Discussion Papers 2001.
The concept of an effectivity function is adopted as a formal model of a constitution. A game form models the actions available and permissible to individuals in a society. As a representation of the constitution, such a game form should endow each group in society with the same power as it has under the constitution. Another desirable property is Nash consistency of the game form: whatever the individual preferences, the resulting game should be minimally stable in the sense of possessing a Nash equilibrium. A first main result of the paper is a characterization of all effectivity functions that have a Nash consistent representation for the case without special structure on the set of alternatives (social states). Next, a similar result is derived for the case where the set of alternatives is a compact metric space and the effectivity function is topological. As a special case, veto functions are considered. Further results concern Pareto optimality of Nash equilibrium outcomes.
Robert Samuel Simon. On The Unique Extensibility And Surjectivity Of Knowledge Structures. Discussion Papers 2001.
With the S5 multi-agent epistemic logic, we consider the canonical maps from Kripke structures to knowledge structures. We define a cell to be a minimal subset of knowledge structures known in common semantically by the agents. A cell has finite fanout if at every point every agent considers only a finite number of other points to be possible. We define a cell to be surjective if every Kripke structure that maps to it does so surjectively. All cells with finite fanout are surjective, but the converse does not hold. To construct a counter-example we need topological insights concerning the relationship between the logic and its semantic models. The difference between syntactic and semantic common knowledge is central to this construction.