Publications

2001
Ullmann-Margalit, E. (2001). Trust, Distrust, and in Between. Discussion Papers. In Russell Hardin (ed.), Distrust, New York: Russell Sage Foundation, 2004, 60-82. Retrieved from /files/dp269.pdf
The springboard for this paper is the nature of the negation relation between the notions of trust and distrust. In order to explore this relation, an analysis of full trust is offered. An investigation follows of the ways in which this "end-concept" of full trust can be negated. In particular, the sense in which distrust is the negation of trust is focused on. An asymmetry is pointed to between 'not-to-trust' and 'not-to-distrust'. This asymmetry helps explain the existence of a gap between trust and distrust: the possibility of being suspended between the two. Since both trust and distrust require reasons, the question this gap raises is what happens if there are no reasons, or at any rate no sufficient reasons, either way. This kind of situation, of being suspended between two poles without a sufficient reason to opt for either of them, paradigmatically calls for a presumption. In the case at hand this means a call for either a rebuttable presumption in favor of trust or a rebuttable presumption in favor of distrust. In some of the literature on trust it seems to be taken almost for granted that generalized distrust is justifiable in a way that generalized trust is not. This would seem to suggest a straightforward recommendation for the presumption of distrust over the presumption of trust. Doubts are raised whether it is indeed justified to adopt this as a default presumption. The notion of soft distrust, introduced at this point in contrast with hard distrust, contributes significantly to these doubts. The analysis offered throughout the paper is of individual and personal trust and distrust. As it stands, it would seem not to be directly applicable to the case of trusting or distrusting institutions (like the court or the police). The question is therefore raised, in the final section, whether and how the analysis of individual trust and distrust can be extended to institutional trust and distrust. A case is made that there is an asymmetry here too: while it is a misnomer to talk of trusting institutions, talk of distrusting institutions is not.
Dubey, P., & Haimanko, O. (2001). Unilateral Deviations with Perfect Information. Discussion Papers. Retrieved from /files/dp249.pdf
For extensive form games with perfect information, consider a learning process in which, at any iteration, each player unilaterally deviates to a best response to his current conjectures of others' strategies, and then updates his conjectures in accordance with the induced play of the game. We show that, for generic payoffs, the outcome of the game becomes stationary in finite time, and is consistent with Nash equilibrium. In general, if payoffs have ties or if players observe more of each other's strategies than is revealed by plays of the game, the same result holds provided a rationality constraint is imposed on unilateral deviations: no player changes his moves in subgames that he deems unreachable, unless he stands to improve his payoff there. Moreover, with this constraint, the sequence of strategies and conjectures also becomes stationary and yields a self-confirming equilibrium.
Hon-Snir, S. (2001). Utility Equivalence in Auctions. Discussion Papers. Retrieved from /files/dp257.pdf
Auctions are considered with a (non-symmetric) independent-private-value model of valuations. It is demonstrated that a utility equivalence principle holds for an agent if and only if that agent has a constant absolute risk attitude.
Neyman, A. (2001). Values of Games with Infinitely Many Players. Discussion Papers. In R. J. Aumann & S. Hart (eds.), Handbook of Game Theory, with Economic Applications, Vol. III, Elsevier/North-Holland (2002), 2121-2167. Retrieved from /files/dp247.pdf
The Shapley value is one of the basic solution concepts of cooperative game theory. It can be viewed as a sort of average or expected outcome, or as an a priori evaluation of the players' expected payoffs. The value has a very wide range of applications, particularly in economics and political science (see chapters 32, 33 and 34 in this Handbook). In many of these applications it is necessary to consider games that involve a large number of players. Often most of the players are individually insignificant, and are effective in the game only via coalitions. At the same time there may exist big players who retain the power to wield single-handed influence. A typical example is provided by voting among stockholders of a corporation, with a few major stockholders and an "ocean" of minor stockholders. In economics, one considers an oligopolistic sector of firms embedded in a large population of "perfectly competitive" consumers. In all of these cases, it is fruitful to model the game as one with a continuum of players. In general, the continuum consists of a non-atomic part (the "ocean"), along with (at most countably many) atoms. The continuum provides a convenient framework for mathematical analysis, and approximates the results for large finite games well. Also, it enables a unified view of games with finite, countable or oceanic player-sets, or indeed any mixture of these.
Hart, S. (2001). Values of Perfectly Competitive Economies. Discussion Papers. In R. J. Aumann & S. Hart (eds.), Handbook of Game Theory, with Economic Applications, Vol. III, Ch. 57, Elsevier/North-Holland (2002). Retrieved from /files/val-hgt.html
This chapter is devoted to the study of economic models with many agents, each of whom is relatively insignificant. These are referred to as perfectly competitive models. The basic economic concept for such models is the competitive (or Walrasian) equilibrium, which prescribes prices that make the total demand equal to the total supply, i.e., under which the "markets clear." The fact that each agent is negligible implies that he cannot singly affect the prices, and so he takes them as given when finding his optimal consumption - "demand." The chapter is organized as follows: Section 2 presents the basic model of an exchange economy with a continuum of agents, together with the definitions of the appropriate concepts. The Value Principle results are stated in Section 3. An informal (and hopefully instructive) proof of the Value Equivalence Theorem is provided in Section 4. Section 5 is devoted to additional material, generalizations, extensions and alternative approaches.
Dubey, P., & Wu, C.-W. (2001). When Less Competition Induces More Product Innovation. Discussion Papers. Economics Letters 74 (2002), 309-312. Retrieved from /files/dp255.pdf
Consider firms which engage in Cournot competition over a common product, but can undertake innovation to improve the quality of their product. In this scenario it can often happen that innovation is discouraged by too much or too little competition, and occurs only when the industry is of intermediate size.
Neyman, A., & Sorin, S. (2001). Zero-Sum Two-Person Repeated Games with Public Uncertain Duration Process. Discussion Papers. Retrieved from /files/dp259.pdf
We consider repeated two-person zero-sum games where the number of repetitions theta is unknown. The information about the uncertain duration is identical to both players and can change during the play of the game. This is described by an uncertain duration process Theta. To each repeated game Gamma and uncertain duration process Theta is associated the Theta repeated game Gamma_Theta with value V_Theta. We establish a recursive formula for the value V_Theta. We study asymptotic properties of the value v_Theta=V_Theta/E(theta) as the expected duration E(theta) goes to infinity. We extend and unify several asymptotic results on the existence of lim v_n and lim v_lambda and their equality to lim v_Theta. This analysis applies in particular to stochastic games and repeated games of incomplete information.
2000
Cohen, D. (2000). A Rational Basis for Irrational Beliefs and Behaviors. Discussion Papers.
No Abstract
Hart, S., & Mas-Colell, A. (2000). A Reinforcement Procedure Leading to Correlated Equilibrium. Discussion Papers. In G. Debreu, W. Neuefeind & W. Trockel (eds.), Economic Essays: A Festschrift for Werner Hildenbrand, Springer (2001), 181-200.
We consider repeated games where at any period each player knows only his set of actions and the stream of payoffs that he has received in the past. He knows neither his own payoff function, nor the characteristics of the other players (how many there are, their strategies and payoffs). In this context, we present an adaptive procedure for play - called "modified-regret-matching" - which is interpretable as a stimulus-response or reinforcement procedure, and which has the property that any limit point of the empirical distribution of play is a correlated equilibrium of the stage game.
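To make the underlying dynamic concrete, here is a minimal sketch of plain conditional regret matching (the Hart and Mas-Colell procedure that the modified version builds on), not the paper's payoff-only variant. The bimatrix game, the normalization constant mu, and all function names are illustrative assumptions.

```python
import random

def regret_matching(A, B, rounds=5000, mu=10.0, seed=0):
    """Conditional regret matching in a bimatrix game (A: row payoffs,
    B: column payoffs). Returns the empirical joint distribution of play
    and the largest time-averaged conditional regret."""
    rng = random.Random(seed)
    n, m = len(A), len(A[0])
    # reg_row[i][j]: cumulative gain the row player would have had by
    # playing j in the periods in which he actually played i (same for column).
    reg_row = [[0.0] * n for _ in range(n)]
    reg_col = [[0.0] * m for _ in range(m)]
    a, b = 0, 0  # arbitrary first actions
    counts = {}
    for t in range(1, rounds + 1):
        counts[(a, b)] = counts.get((a, b), 0) + 1
        for j in range(n):
            reg_row[a][j] += A[j][b] - A[a][b]
        for j in range(m):
            reg_col[b][j] += B[a][j] - B[a][b]

        def next_action(cur, reg):
            # Switch to j with probability max(regret, 0) / (t * mu);
            # stay at the current action with the remaining probability.
            probs = [max(r, 0.0) / (t * mu) for r in reg[cur]]
            probs[cur] = 0.0
            probs[cur] = max(0.0, 1.0 - sum(probs))
            x, acc = rng.random(), 0.0
            for k, p in enumerate(probs):
                acc += p
                if x < acc:
                    return k
            return cur

        a, b = next_action(a, reg_row), next_action(b, reg_col)
    empirical = {ab: c / rounds for ab, c in counts.items()}
    max_regret = max(r / rounds for reg in (reg_row, reg_col)
                     for row in reg for r in row)
    return empirical, max_regret

# Example: "Chicken". By the Hart-Mas-Colell theorem, empirical play
# approaches the set of correlated equilibria and average regrets vanish.
A = [[6, 2], [7, 0]]
B = [[6, 7], [2, 0]]
dist, max_regret = regret_matching(A, B)
```

After a few thousand rounds the maximal average regret is close to zero, which is exactly the convergence property the abstract describes.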
Yaniv, I., & Schul, Y. (2000). Acceptance and Elimination Procedures in Choice: Non-Complementarity and the Role of Implied Status Quo. Discussion Papers. Organizational Behavior and Human Decision Processes 82 (2000), 293-313. Retrieved from /files/dp211.pdf
The present research contrasts two seemingly complementary decision strategies: acceptance and elimination. In acceptance, a choice set is created by including suitable alternatives from an initial set of alternatives, whereas in elimination it is created by removing inappropriate alternatives from that same initial set. The research used realistic career decision-making scenarios and presented to respondents sets of alternatives that varied in their pre-experimental strength values. Whereas complementarity of acceptance and elimination is implied by three standard (normative) assumptions of decision theory, we find a systematic discrepancy between the outcomes of these procedures: choice sets were larger in elimination than in acceptance. This acceptance/elimination discrepancy is directly tied to sub-complementarity. The central tenet of the theoretical framework developed here is that acceptance and elimination procedures imply different types of status quo for the alternatives, thereby invoking a different selection criterion for each procedure. A central prediction of the dual-criterion framework is that "middling" alternatives should be most susceptible to the type of procedure used. The present studies focus on this prediction, which is substantiated by the results showing that "middling" alternatives yield the greatest discrepancy between acceptance and elimination. The implications of this model and findings for various research domains are discussed.
Yaniv, I., & Kleinberger, E. (2000). Advice Taking in Decision Making: Egocentric Discounting and Reputation Formation. Discussion Papers. Organizational Behavior and Human Decision Processes 83 (2000), 260-281. Retrieved from /files/dp212.pdf
Our framework for understanding advice-taking in decision making rests on two theoretical concepts that motivate the studies and serve to explain the findings. The first is egocentric discounting of others' opinion and the second is reputation formation for advisors. We review the evidence for these concepts, trace their theoretical origins, and point out some of their implications. In three studies we measured decision makers' "weighting policy" for the advice, and in a fourth study, their "willingness to pay" for it. Briefly, we found that advice is discounted relative to own opinion, and reputation for advisors is rapidly formed and asymmetrically revised. The asymmetry implies that it may be easier for advisors to lose a good reputation than to gain it. The cognitive and social origins of these phenomena are considered.
Nirel, R., & Gorfine, M. (2000). Analysing Data of Intergroup Prisoner's Dilemma Game. Discussion Papers. Retrieved from /files/dp215.ps
The Intergroup Prisoner's Dilemma (IPD) game was suggested by Bornstein (1992) for modeling intergroup conflicts over continuous public goods. We analyze data of an experiment in which the IPD game was played for 150 rounds, under three matching conditions. The objective is to study differences in the investment patterns of players in the different groups. A repeated measures analysis (Goren & Bornstein, 1999) involved data aggregation and strong distributional assumptions. Here we introduce a non-parametric approach based on permutation tests, applied to the raw data. Two new measures, the cumulative investment and the normalized cumulative investment, provide additional insight into the differences between groups. The proposed tests, based on the area under the investment curves, identify overall and pairwise differences between groups. A simultaneous confidence band for the mean difference curve is used to detect games which account for pairwise differences.
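The area-based permutation test described above can be sketched generically. The statistic (absolute difference in mean area under the investment curves), the group sizes, and the function names below are assumptions for illustration, not the authors' exact procedure.

```python
import random

def area(curve):
    # Area under a per-round investment curve, taking each round's
    # investment as a unit-width bar.
    return sum(curve)

def permutation_test(group_a, group_b, n_perm=2000, seed=0):
    """Two-sample permutation test on the mean area under the curves.
    Returns an approximate p-value (with the standard +1 correction)."""
    rng = random.Random(seed)
    k = len(group_a)
    observed = abs(sum(map(area, group_a)) / k
                   - sum(map(area, group_b)) / len(group_b))
    pooled = group_a + group_b
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of group membership
        stat = abs(sum(map(area, pooled[:k])) / k
                   - sum(map(area, pooled[k:])) / (len(pooled) - k))
        if stat >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

# Hypothetical data: five consistently high-investing players versus
# five who never invest, over a 10-round game.
high = [[1.0] * 10 for _ in range(5)]
low = [[0.0] * 10 for _ in range(5)]
p_value = permutation_test(high, low)
```

Because the test works on the raw curves, it needs none of the distributional assumptions of a repeated-measures analysis, which is the point the abstract makes.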
Shapira, Z. (2000). Aspiration Levels and Risk Taking by Government Bond Traders. Discussion Papers. Retrieved from /files/zur227.pdf
The management of risk is important in financial institutions. In particular, investment houses dealing with volatile financial markets such as foreign exchange or government bonds may find it difficult to maintain "proper" levels of risk taking. On one hand, firms encourage traders to take risks in trading government bonds, but on the other, they promote risk aversion, since they value a reputation as careful and solid investors rather than a reputation as risk takers. Government bond traders work in a very volatile and fast-moving market. They are compensated by a base salary plus a bonus which relates to the profit and loss (P&L) they create for the firm on the securities they trade. Recent models of risk taking (Kahneman and Tversky, 1979; March and Shapira, 1992; Shapira, 1995) suggest that risk taking is affected by the targets or reference points that people use to evaluate risky prospects. Such targets can be set on "objective" grounds, that is, based on some rational economic considerations of profitability. However, often the targets are set in a "comparative" sense, that is, by comparison to the performance of other similar firms. The above models suggest some alternative ways in which targets may affect risk taking. These predictions are tested using data on actual purchase and sell decisions made by government bond traders. Implications for risk management are discussed.
Simon, R. S. (2000). The Common Prior Assumption in Belief Spaces: An Example. Discussion Papers. Retrieved from /files/dp228.PDF
With four persons there is an example of a probability space where 1) the space is generated by hierarchies of knowledge concerning a single proposition, 2) the subjective beliefs of the four persons are continuous regular conditional probability distributions of a common prior probability distribution (continuous with respect to the weak topology), and 3) for every subset that the four persons know in common there is no common prior probability distribution. Furthermore, for every measurable set, every person, and at every point in the space, the subjective belief in this measurable set is one of the quantities 0, 1/2 or 1. This example presents problems for understanding games of incomplete information through common priors.
Simon, R. S. (2000). Epsilon-Equilibria in Non-Zero-Sum Stochastic Games with Finitely Many States: An Existence Proof Using Discount Factors. Discussion Papers.
This paper proves the existence of epsilon-equilibria in non-zero-sum positive recursive stochastic games with finitely many states, using a kind of discount factor.
Ben-Shoham, A., Serrano, R., & Volij, O. (2000). The Evolution of Exchange. Discussion Papers. Retrieved from /files/dp219.pdf
Stochastic stability is applied to the problem of exchange. We analyze the stochastic stability of two dynamic trading processes in a simple housing market. In both models traders meet in pairs at random and exchange their houses when trade is mutually beneficial, but occasionally they make mistakes. The models differ in the probability of mistakes. When all mistakes are equally likely, the set of stochastically stable allocations contains the set of efficient allocations. When more serious mistakes are less likely, the stochastically stable states are the allocations with the lowest envy level; these are always efficient.
Keasar, T., Rashkovich, E., Cohen, D., & Shmida, A. (2000). Foraging Bees in Two-Armed Bandit Situations: Laboratory Experiments and Possible Decision Rules. Discussion Papers. Behavioral Ecology 13 (2002), 757-765.
In multi-armed bandit situations, gamblers must choose repeatedly between options that differ in reward probability, without prior information on the options' relative profitability. Foraging bumblebees encounter similar situations when choosing repeatedly among flower species that differ in food rewards. Unlike proficient gamblers, bumblebees do not choose the highest-rewarding option exclusively. We simulated two-armed bandit situations in laboratory experiments to characterize this choice behavior.
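One simple candidate decision rule that reproduces the non-exclusive choice pattern the abstract mentions is probability matching over a short-term memory window. This sketch is purely illustrative: the reward probabilities, window length, and function name are assumptions, not the rules the paper tests.

```python
import random
from collections import deque

def forage(p_rewards=(0.8, 0.2), visits=2000, memory=10, seed=1):
    """Simulated forager on a two-armed bandit: each visit, choose an
    arm with probability proportional to the rewards remembered from
    the last `memory` visits. Returns the share of visits to arm 0."""
    rng = random.Random(seed)
    recent = deque(maxlen=memory)  # (arm, reward) for recent visits
    choices = [0, 0]
    for _ in range(visits):
        # Estimated value of each arm from short-term memory, with a
        # small prior so neither arm's probability drops to zero.
        val = [0.5, 0.5]
        for arm, r in recent:
            val[arm] += r
        arm = 0 if rng.random() < val[0] / (val[0] + val[1]) else 1
        reward = 1.0 if rng.random() < p_rewards[arm] else 0.0
        recent.append((arm, reward))
        choices[arm] += 1
    return choices[0] / visits

p_best_share = forage()
```

Under this rule the forager visits the higher-rewarding arm most of the time but never exclusively, qualitatively matching the bumblebees' behavior rather than a proficient gambler's.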
Cahn, A. (2000). General Procedures Leading to Correlated Equilibria. Discussion Papers. International Journal of Game Theory 33 (2004), 21-40. Retrieved from /files/dp216.pdf
Hart and Mas-Colell (2000) show that if all players play "regret matching" strategies, i.e. they play with probabilities proportional to the regrets, then the empirical distributions of play converge to the set of correlated equilibria, and the regrets of each player converge to zero. Here we show that if only one player, say player i, plays according to these probabilities, while the other players are "not too sophisticated", then the result that player i's regrets converge to zero continues to hold. The condition of "not too sophisticated" essentially says that the effect of one change of action of player i on the future actions of the other players decreases to zero as the horizon goes to infinity. Furthermore, we generalize all these results to a whole class of "regret-based" strategies. In particular, these include the "smooth fictitious play" of Fudenberg and Levine (1998).
Volij, O. (2000). In Defense of DEFECT. Discussion Papers. Games and Economic Behavior 39 (2000), 309-321. Retrieved from /files/dp220.pdf
The one-state machine that always defects is the only evolutionarily stable strategy in the machine game that is derived from the prisoner's dilemma, when preferences are lexicographic in complexity. This machine is the only stochastically stable strategy of the machine game when players are restricted to choosing machines with a uniformly bounded complexity.
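The one-state always-defect machine can be written down directly. The sketch below only shows the machine (Moore automaton) representation and the average-payoff computation on which the machine game is built; the evolutionary and stochastic-stability analysis is not reproduced, and the encoding, the tit-for-tat opponent, and the standard (3, 0, 5, 1) prisoner's-dilemma payoffs are illustrative assumptions.

```python
# A machine is (initial_state, output, transition):
#   output[state] -> own action, transition[(state, opp_action)] -> next state.
def play(m1, m2, rounds=100):
    """Average per-round payoffs when two machines play the repeated PD."""
    payoff = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
    s1, s2 = m1[0], m2[0]
    tot1 = tot2 = 0
    for _ in range(rounds):
        a1, a2 = m1[1][s1], m2[1][s2]
        p1, p2 = payoff[(a1, a2)]
        tot1, tot2 = tot1 + p1, tot2 + p2
        s1, s2 = m1[2][(s1, a2)], m2[2][(s2, a1)]
    return tot1 / rounds, tot2 / rounds

# The one-state machine that always defects: a single state, minimal complexity.
DEFECT = ('s', {'s': 'D'}, {('s', 'C'): 's', ('s', 'D'): 's'})

# A two-state opponent for comparison: tit-for-tat, states named by the
# action it plays, moving to whatever the opponent just played.
TIT_FOR_TAT = ('C', {'C': 'C', 'D': 'D'},
               {('C', 'C'): 'C', ('C', 'D'): 'D',
                ('D', 'C'): 'C', ('D', 'D'): 'D'})

avg_dd = play(DEFECT, DEFECT)
avg_dt = play(DEFECT, TIT_FOR_TAT)
```

Against itself DEFECT earns the mutual-defection payoff every round, and against tit-for-tat it gains only on the first round; under lexicographic preferences for lower complexity, its single state is what the paper's stability arguments exploit.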
Ney-Nifle, M., Keasar, T., & Shmida, A. (2000). Location and Color Learning in Bumblebees in a Two-Phase Conditioning Experiment. Discussion Papers. Journal of Insect Behavior 14 (2001), 697-711. Retrieved from /files/db213.pdf
Bees learn the location, odor, color and shape of flowers, and use these cues hierarchically to make dietary choices. If two such cues always appear together, they provide the bees with identical information about their food source. In such a situation, bees may base dietary choices on one cue and ignore the other, or they may continue to consider both cues. We studied this question by allowing bumblebees to forage on two patches of artificial flowers that differed in location, color and presence of reward in a two-phase laboratory experiment. We switched either the display color, the location, or both color and location associated with the rewarding patch between experimental phases. We tested for the effects of switches by comparing the bees' choices across treatments, and by evaluating each bee's performance before and after the change. In our analysis we characterized the different patterns of visits to empty flowers by a plot of the cumulative frequency of such visits over time. This plot enabled us to identify two regimes: (1) a learning regime, when new associations between reward and display cues are formed, followed by (2) a steady state, where bees make periodic visits to the empty patch. We used likelihood analysis to estimate the length of short-term memory that can account for the bees' steady-state foraging choices. The bees' performance decreased immediately following a switch in location of the rewarding patch. Switches in both reward color and location elicited a decrease similar to switches in location only. No temporary decrease in foraging performance occurred when only the color of the rewarding patch was changed, or in no-change controls. The bees' flower choices at steady state were most likely generated by a short-term memory of the last 4-6 flower visits.