Publications | The Federmann Center for the Study of Rationality

Publications

1994
Daniel Granot, Michael Maschler, Guillermo Owen, and Weiping R. Zhu. The Kernel/Nucleolus Of A Standard Tree Game. Discussion Papers 1994: n. pag. Print. Abstract:
In this paper we characterize the nucleolus (which coincides with the kernel) of a tree enterprise. We also provide a new algorithm to compute it, which sheds light on its structure. We show that in particular cases, including a chain enterprise, one can compute the nucleolus in O(n) operations, where n is the number of vertices in the tree.
Dagan, Nir. New Characterizations Of Old Bankruptcy Rules. Discussion Papers 1994: n. pag. Print. Abstract:
This paper presents axiomatic characterizations of two bankruptcy rules discussed in Jewish legal literature: the Constrained Equal Awards rule and the Contested Garment principle (the latter is defined only for two-creditor problems). A major property in these characterizations is independence of irrelevant claims, which requires that if an individual claim exceeds the total to be allocated it should be considered irrelevant.
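Both rules are simple enough to state algorithmically. As an illustration only (the function names and the bisection scheme below are not taken from the paper), the Constrained Equal Awards rule and the two-creditor Contested Garment division can be sketched in Python:

```python
def cea(estate, claims, tol=1e-9):
    """Constrained Equal Awards: each claimant i receives min(c_i, lam),
    with lam chosen by bisection so the awards exhaust the estate.
    Assumes 0 <= estate <= sum(claims)."""
    lo, hi = 0.0, max(claims)
    while hi - lo > tol:
        lam = (lo + hi) / 2
        if sum(min(c, lam) for c in claims) < estate:
            lo = lam
        else:
            hi = lam
    return [min(c, hi) for c in claims]

def contested_garment(estate, d1, d2):
    """Two-creditor Contested Garment division: each creditor is first
    conceded the part of the estate the other does not claim; the
    contested remainder is then split equally."""
    conceded1 = max(estate - d2, 0.0)
    conceded2 = max(estate - d1, 0.0)
    contested = estate - conceded1 - conceded2
    return conceded1 + contested / 2, conceded2 + contested / 2
```

For example, `cea(100, [30, 60, 90])` yields (approximately) awards of 30, 35, and 35, and `contested_garment(1, 1, 0.5)` reproduces the classical Talmudic (3/4, 1/4) split.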
Orshan, Gonni. Non-Symmetric Prekernels. Discussion Papers 1994. Web. Publisher's Version. Abstract:
A "symmetry" property, either in the version of "equal treatment" or in the version of "anonymity", is one of the standard intuitively acceptable properties satisfied by most well-known solution concepts in game theory. However, there are many instances where symmetry is counterintuitive. This paper analyzes non-symmetric prekernels: solution concepts that satisfy Peleg's axioms for the prekernel [1986, 1987], with equal treatment replaced by the requirement that the solution of each 2-person game consist of a unique point. It is shown that non-symmetric prekernels do exist, and a full characterization is then provided.
Pitowsky, Itamar. On The Concept Of The Proof In Modern Mathematics. Discussion Papers 1994: n. pag. Print. Abstract:
This paper deals with the attempts to characterize the set of all proofs in a given mathematical domain such as geometry or number theory. The characterization usually takes the form of a finite list of axiom schemata and inference rules, which is thought to be complete. A related effort, which originated with Descartes, is to replace proofs - that is, reasoning about concepts and relations - by the solution of algebraic equations which are shown to be equivalent to the proofs. These formalist tendencies have always been opposed by intuitionists. I trace the dispute from Descartes and Leibniz through Kant all the way to its climax in the fifty years between the demonstration of the relative consistency of hyperbolic geometry and the discovery of Gödel's theorems. My purpose is both historical and philosophical. On the historical level, I argue that Hilbert's program was not only a foundationalist effort to secure the consistency of mathematics. It was, in addition, an internal mathematical program in the aforementioned Cartesian tradition of replacing proofs by computations. The demise of Hilbert's philosophical pretensions brought considerable and unexpected success to the mathematical program: Gödel's theorem, which shows how to replace proofs by computations in very extensive domains of mathematics, and, ultimately, the Davis-Robinson-Putnam-Matiyasevich theorem, which demonstrates, roughly, that every proof in those domains is equivalent to a solution of an algebraic (i.e., polynomial) equation. The fact that the notion of proof in number theory is indefinitely extensible (by Gödel's theorem) depends on a complete characterization of the concept of 'computation' (the Church-Turing thesis). On the philosophical level, I argue that this dependence undermines some contemporary intuitionist claims (by Weyl and Dummett) which are based on Gödel's results.
Dagan, Nir. On The Least Sacrifice Principle In Taxation. Discussion Papers 1994. Web. Publisher's Version. Abstract:
Utilitarian philosophers and economists have recommended that, in applying taxation programs, governments should minimize the sum total of sacrifice made by individuals. This paper presents a model and an axiom system of taxation policies in which the Least Sacrifice Principle is derived. A key axiom in our characterization is self-consistency. Other relations between self-consistency and welfare maximization, in our model and in other models, are also discussed.
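In its classical utilitarian reading (identical, strictly concave utilities for all individuals), the Least Sacrifice Principle reduces to "leveling from the top": tax away all income above a threshold chosen so that exactly the required revenue is raised. A minimal sketch of that textbook special case, illustrative only and not the paper's axiomatic model:

```python
def least_sacrifice_tax(incomes, revenue, tol=1e-9):
    """Equal-marginal-sacrifice taxation with identical concave utility:
    find a threshold z by bisection such that taxing away all income
    above z raises exactly `revenue`. Assumes 0 <= revenue <= sum(incomes)."""
    lo, hi = 0.0, max(incomes)
    while hi - lo > tol:
        z = (lo + hi) / 2
        if sum(max(y - z, 0.0) for y in incomes) > revenue:
            lo = z  # threshold too low: too much collected
        else:
            hi = z
    return [max(y - hi, 0.0) for y in incomes]
```

For instance, with incomes of 100, 50, and 30 and a revenue requirement of 60, the threshold settles at 45 and the resulting taxes are 55, 5, and 0: only the incomes above the threshold are taxed.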
Agastya, Murali. Ordinality Of The Shapley Value. Discussion Papers 1994. Web. Publisher's Version. Abstract:
In Roth (1977) it is argued that the Shapley value is the cardinal utility of playing a game and that it inherits properties used to define the underlying game itself. Implicit in this statement is the assumption that the TU game is generated by allowing for lotteries over an underlying set of alternatives. However, often there is a single numeraire good that can generate a game. In such instances, it is unclear why the utility of playing a game is cardinal when the preferences for the underlying good are ordinal. This paper presents a framework in which the Shapley value emerges as the representation of a preference ordering over a set of games. This representation is unique only up to positive monotone transformations, thereby establishing the ordinality of the value.
Hart, Sergiu, and Dov Monderer. Potentials And Weighted Values Of Non-Atomic Games. Discussion Papers 1994. Web. Publisher's Version. Abstract:
The "potential approach" to value theory for finite games was introduced by Hart and Mas-Colell (1989). Here this approach is extended to non-atomic games. On appropriate spaces of differentiable games there is a unique potential operator that generates the Aumann and Shapley (1974) value. As a corollary we obtain the uniqueness of the Aumann-Shapley value on certain subspaces of games. Next, the potential approach is applied to the weighted case, leading to "weighted non-atomic values". It is further shown that the asymptotic weighted value is well-defined and that it coincides with the weighted value generated by the potential.
Margalit, Avishai, and Menahem E. Yaari. Rationality And Comprehension. Discussion Papers 1994. Web. Publisher's Version. Abstract:
Devising a theory of knowledge for interacting agents has been on many people's minds recently. A near consensus has emerged, that the appropriate framework is a multi-agent version of C.I. Lewis's system S5 or one of S5's standard weakenings. In this essay, it is argued that such a framework cannot possibly be adequate, if it is to capture the intricacies of genuine inter-agent epistemics. Introducing a notion of "comprehension" – knowledge which is non-sensory yet non-analytic – may possibly be a remedy.
Ma, Jinpeng. Stable Matchings And Rematching-Proof Equilibria In A Two-Sided Matching Market. Discussion Papers 1994. Web. Publisher's Version. Abstract:
In this paper we introduce the notion of a rematching-proof equilibrium for a two-sided matching market to resolve Roth's open question: what kind of equilibria of the game induced by any stable mechanism with respect to misreported profiles produce matchings that are stable with respect to the true profile? We show that the outcome of a rematching-proof equilibrium is stable with respect to the true profile even though the equilibrium profile may contain misreported preferences. We show that a rematching-proof equilibrium exists. Moreover, we extend these two results to strong equilibria. Furthermore, the Nash equilibria in Roth [11] are shown to be rematching-proof equilibria. The relation between rematching-proof equilibria and strong equilibria is discussed as well.
Balkenborg, Dieter. Strictness And Evolutionary Stability. Discussion Papers 1994. Web. Publisher's Version. Abstract:
The notion of a strict equilibrium set is introduced as a natural extension of the notion of a strict equilibrium point. The evolutionarily stable sets of a truly asymmetric contest are shown to be behaviorally equivalent to the strict equilibrium sets of an "agent representation" of the contest. Using variants of the replicator dynamic we provide dynamic characterizations of strict equilibrium sets. We do this both for truly asymmetric contests and for arbitrary normal form games modelling conflicts between several distinct species.
Glazer, Jacob, and Motty Perry. Virtual Implementation In Backwards Induction. Discussion Papers 1994. Web. Publisher's Version. Abstract:
We examine a sequential mechanism which is a simple modification of the normal form mechanism introduced by Abreu and Matsushima (1992). We show that almost any social choice function can be virtually implemented via a finite sequential game of perfect information. The solution concept assumed is Subgame Perfect Equilibrium or Iterative Elimination of Strictly Dominated Strategies. In particular, any social choice function that is virtually implementable via the Abreu-Matsushima mechanism is also virtually implementable by a sequential mechanism.
Winter, Eyal. Voting And Vetoing. Discussion Papers 1994. Web. Publisher's Version. Abstract:
The consequences of veto power in committees are analyzed using the approach of non-cooperative bargaining theory. It is first shown that in equilibrium non-veto players do not share in the benefits gained by the decision making of the committee, i.e., in every equilibrium outcome of the bargaining game non-veto players earn zero. Some measures for reducing the excessive power of veto members in committees are analyzed. Specifically, we study the effects of imposing a deadline on negotiations and of expanding the committee by increasing the number of non-veto players.
1993
Sergiu Hart, Aviad Heifetz, and Dov Samet. 'Knowing Whether,' 'Knowing That,' And The Cardinality Of State Spaces. Discussion Papers 1993: n. pag. Print. Abstract:
We introduce a new operator on information structures which we call 'knowing whether', as opposed to the standard knowledge operator, which may be called 'knowing that'. The difference between these operators is simple. Saying that an agent knows that a certain event occurred implies that this event indeed occurred, while saying that the agent knows whether an event occurred does not imply that the event occurred. (Formally, knowing whether X means that either it is known that X occurred or it is known that X did not occur.) We show that iterating 'knowing whether' operators of different agents has a remarkable property that iterations of 'knowing that' do not have. When we generate a sequence of events, starting with a given event and then applying 'knowing that' or 'not knowing that' to the previous event, the events in this sequence may, somewhat surprisingly, be contradictory. In contrast, any sequence of this type generated with 'knowing whether' and 'not knowing whether' is never contradictory. We use this property of the 'knowing whether' operator to construct a simple and natural state space and information structures for two agents such that: (1) any two states are distinct relative to some interactive knowledge of a fixed event, and (2) the space has the cardinality of the continuum. This result - originally proved in a complicated manner by Aumann (1989) - demonstrates the usefulness of the 'knowing whether' operator.
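On a standard partitional information structure the two operators have a direct set-theoretic reading; a minimal sketch (the names and the representation are illustrative, not the paper's formalism):

```python
def cell(partition, state):
    # the information cell (partition block) containing `state`
    return next(block for block in partition if state in block)

def knows_that(states, partition, event):
    # K(E): the states at which the agent's information cell lies inside E
    return {s for s in states if cell(partition, s) <= event}

def knows_whether(states, partition, event):
    # Kw(E) = K(E) union K(not E): the agent can say which of E / not-E holds
    return (knows_that(states, partition, event)
            | knows_that(states, partition, states - event))
```

With `states = {1, 2, 3}`, partition `[{1}, {2, 3}]`, and event `{1}`, `knows_that` returns `{1}` while `knows_whether` returns all of `{1, 2, 3}`: at every state the agent knows whether the event occurred, though only at state 1 does she know that it did.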
Nir Dagan, Roberto Serrano, and Oscar Volij. A Noncooperative View Of Consistent Bankruptcy Rules. Discussion Papers 1993. Web. Publisher's Version. Abstract:
We introduce a game form that captures a non-cooperative dimension of the consistency property of bankruptcy rules. Any consistent and monotone rule is fully characterized by a bilateral principle and consistency. Like the consistency axiom, our game form, together with a bilateral principle, yields the respective consistent bankruptcy rule as the unique outcome of subgame perfect equilibria. The result holds for a large class of consistent and monotone rules, including the Constrained Equal Awards rule, the Proportional rule, and many other well-known rules. Moreover, for a large class of rules, all the subgame perfect equilibria are coalition-proof.
Agastya, Murali. An Evolutionary Bargaining Model. Discussion Papers 1993: n. pag. Print. Abstract:
Varying quantities of a single good can be produced using at least two and at most n factors of production. The problem of allocating the surplus is studied in a dynamic model with adaptive behavior. Representatives for the factors (referred to as players) make wage demands based on precedent, ignorant of each other's utilities for this good. Necessary and sufficient conditions are provided under which the long-run equilibria coincide with the core allocations. Moreover, allowing for the possibility of mistakes by the players, it is shown that the unique limiting stochastically stable outcome maximizes the product of the players' utilities subject to being in the core of the technology.
Frank Thuijsman, Bezalel Peleg, Mor Amitai, and Avi Shmida. Automata, Matching And Foraging Behavior Of Bees. Discussion Papers 1993. Web. Publisher's Version. Abstract:
In this paper we discuss two types of foraging strategies for bees. Each of these explicit strategies explains how, in the environment of a monomorphic bee community, the bees will distribute themselves over the available homogeneous nectar sources according to the Ideal Free Distribution. At the same time these strategies explain how, in single-bee experimental settings, a bee will match, by its number of visits, the nectar supply from the available sources (the Matching Law). Moreover, both strategies explain that in certain situations the bees may behave as if they are risk averse, i.e., spend more time on the flower type with the lower variance in nectar supply.
Bicchieri, Cristina. Counterfactuals, Belief Changes, And Equilibrium Refinements. Discussion Papers 1993: n. pag. Print. Abstract:
The literature on Nash equilibrium refinements provides several ways to check the stability of a Nash equilibrium against deviations from equilibrium play. Stability, however, is a function of how a deviation is being interpreted. An equilibrium that is stable under one interpretation may cease to be stable under another, but the refinement literature provides no general criterion to judge the plausibility of different interpretations of off-equilibrium play. This paper specifies a model of belief revision that minimizes the loss of useful information. When several interpretations are compatible with off-equilibrium play, the one that requires the least costly belief revision (in terms of informational value) will be chosen by the players. This model of belief revision generates a plausibility ranking of interpretations of deviations, hence it also provides a ranking of Nash equilibrium refinements.
Harel, Alon. Efficiency And Fairness In Criminal Law: The Case For A Criminal Law Doctrine Of Comparative Fault. Discussion Papers 1993: n. pag. Print. Abstract:
Criminal law is traditionally described as directing its injunctions exclusively to actual or potential criminals. This article will argue that the traditional view is normatively unjustified both on efficiency and fairness grounds. To disregard the victim's conduct in determining the sanctions of criminals is both inefficient and unfair. It is inefficient because dismissing the behavior of the victim as irrelevant to the concerns of the criminal justice system does not provide optimal incentives for victims to take precautions against crime. It is unfair to disregard the victim's conduct because given the greater likelihood that careless potential victims (relative to cautious ones) will become actual victims of crime, the expected costs of protecting careless victims are higher than the expected costs of protecting cautious ones. Hence, under the current system, cautious victims are exploited for the sake of protecting careless ones. Both efficiency and fairness considerations suggest that criminal law should adopt a criminal law doctrine of comparative fault, under which criminals who act against careless victims would be exculpated or their punishment mitigated.
Bar-Hillel, Maya, and David Budescu. The Elusive Wishful Thinking Effect. Discussion Papers 1993. Web. Publisher's Version. Abstract:
We define a desirability effect as the inflation of the judged probability of desirable events and the diminution of the judged probability of undesirable events. A series of studies designed to detect this effect is reported. In the first four experiments, subjects were presented with visual stimuli (a grid matrix in two colors, or a jar containing beads in two colors), and asked to estimate the probability of drawing at random one of the colors. The estimated probabilities for a defined draw were not higher when the draw entailed a gain than when it entailed a loss. In the fifth and sixth experiments, subjects read short stories each describing two contestants competing for some desirable outcome (e.g., firms competing for a contract). Some judged the probability that A would win, others judged the desirability that A would win. Story elements which enhanced a contestant's desirability without having normative bearing on its winning probability did not cause the favored contestant to be judged more likely to win. Only when a contestant's desirability was enhanced by promising the subject a monetary prize contingent on that contestant's win was there some slight evidence for a desirability effect: contestants were judged more likely to win when the subject expected a prize if they won than when the subject expected a prize if the other contestant won. In the last experiment, subjects estimated the probability of an over-20 point weekly change in the Dow Jones average, and were promised monetary prizes contingent on such a change either occurring, or failing to occur. They were also given a monetary incentive for accuracy. Subjects who desired a large change did not judge it more likely to occur than subjects who desired a small change. We discuss the difficulty of obtaining a desirability effect on probabilities, and argue that apparently wishful thinking – in the form of optimistic cognitions – can occur without affecting the evaluation of evidence.
Bicchieri, Cristina, and Gian Aldo Antonelli. Game-Theoretic Axioms For Local Rationality And Bounded Knowledge. Discussion Papers 1993: n. pag. Print. Abstract:
We present an axiomatic approach for a class of finite, extensive form games of perfect information that makes use of notions like "rationality at a node" and "knowledge at a node". We show that, in general, a theory that is sufficient to infer an equilibrium must be modular: for each subgame G' of a game G, the theory of game G must contain just enough information about the subgame G' to infer an equilibrium for G'. This means, in general, that the level of knowledge relative to any subgame of G must not be the same as the level of knowledge relative to the original game G. We show that whenever the theory of the game is the same at each node, a deviation from equilibrium play forces a revision of the theory at later nodes. By contrast, whenever a theory of the game is modular, a deviation from equilibrium play does not cause any revision of the theory of the game.