2007
Bornstein, Gary.
“A Classification Of Games By Player Type”.
Discussion Papers 2007. Web.
Abstract: In this paper I classify situations of interdependent decision-making, or games, based on the type of decision-makers, or players, involved. The classification builds on a distinction between three basic types of decision-making agents: individuals, cooperative or unitary groups – groups whose members can reach a binding (and costless) agreement on a joint strategy – and non-cooperative groups – groups whose members act independently without being able to make a binding agreement. Pitting individuals, unitary groups, and non-cooperative groups against one another, and adding Nature as a potential opponent, generates a 3 (type of agent) × 4 (type of opponent) matrix of social situations. This framework is used to review the experimental decision-making literature and point out the gaps that still exist in it.
Lehmann, Daniel.
“A Presentation Of Quantum Logic Based On An And Then Connective”.
Discussion Papers 2007. Web.
Abstract: When a physicist performs a quantic measurement, new information about the system at hand is gathered. This paper studies the logical properties of how this new information is combined with previous information. It presents Quantum Logic as a propositional logic under two connectives: negation and the "and then" operation that combines old and new information. The "and then" connective is neither commutative nor associative. Many properties of this logic are exhibited, and some small elegant subset is shown to imply all the properties considered. No independence or completeness result is claimed. Classical physical systems are exactly characterized by the commutativity, the associativity, or the monotonicity of the "and then" connective. Entailment is defined in this logic and can be proved to be a partial order. In orthomodular lattices, the operation proposed by Finch in [3] satisfies all the properties studied in this paper. All properties satisfied by Finch's operation in modular lattices are valid in Quantum Logic. It is not known whether all properties of Quantum Logic are satisfied by Finch's operation in modular lattices.
Jean-François Mertens, Abraham Neyman, and Dinah Rosenberg.
“Absorbing Games With Compact Action Spaces”.
Discussion Papers 2007. Web.
Abstract: We prove that games with absorbing states and compact action sets have a value.
Aumann, Robert J., and Roberto Serrano.
“An Economic Index Of Riskiness”.
Discussion Papers 2007. Web.
Abstract: Define the riskiness of a gamble as the reciprocal of the absolute risk aversion (ARA) of an individual with constant ARA who is indifferent between taking and not taking that gamble. We characterize this index by axioms, chief among them a "duality" axiom which, roughly speaking, asserts that less risk-averse individuals accept riskier gambles. The index is homogeneous of degree 1, monotonic with respect to first and second order stochastic dominance, and, for gambles with normal distributions, equals half the variance divided by the mean. Examples are calculated, additional properties derived, and the index is compared with others in the literature.
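For a gamble g with a known distribution, the abstract's definition pins the index down numerically: a constant-ARA individual with ARA α is indifferent between taking and not taking g exactly when E[exp(-αg)] = 1, so the index is R = 1/α. A minimal numerical sketch (my illustration, not the authors' code; the root bracket is an assumption that may need adjusting per gamble):

```python
import numpy as np
from scipy.optimize import brentq

def riskiness(values, probs):
    """Aumann-Serrano index R(g): the unique R > 0 such that a CARA agent
    with ARA 1/R is indifferent to g, i.e. E[exp(-g/R)] = 1."""
    values, probs = np.asarray(values, float), np.asarray(probs, float)
    f = lambda R: probs @ np.exp(-values / R) - 1.0
    return brentq(f, 1.0, 1e7)  # bracket chosen for this example

# Win 105 or lose 100 with equal probability:
print(riskiness([105.0, -100.0], [0.5, 0.5]))  # about 2100,
# close to half the variance over the mean (10506.25 / 5 = 2101.25)
```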
Ariel D. Procaccia, Michal Feldman, and Jeffrey S. Rosenschein.
“Approximability And Inapproximability Of Dodgson And Young Elections”.
Discussion Papers 2007. Web.
Abstract: The voting rules proposed by Dodgson and Young are both designed to find the candidate closest to being a Condorcet winner, according to two different notions of proximity; the score of a given candidate is known to be hard to compute under both rules. In this paper, we put forward an LP-based randomized rounding algorithm which yields an O(log m) approximation ratio for the Dodgson score, where m is the number of candidates. Surprisingly, we show that the seemingly simpler Young score is NP-hard to approximate by any factor.
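Both rules reward proximity to the Condorcet condition, which is itself straightforward to verify; a small sketch of that baseline check (my illustration; the ranking format is an assumption, and this computes neither the Dodgson nor the Young score):

```python
def condorcet_winner(profile):
    """profile: list of voter rankings, each a list of candidates, best first.
    Returns the candidate who beats every other candidate in pairwise
    majority contests, or None if no such candidate exists."""
    candidates = profile[0]
    def beats(a, b):
        wins = sum(r.index(a) < r.index(b) for r in profile)
        return wins > len(profile) / 2
    for c in candidates:
        if all(beats(c, d) for d in candidates if d != c):
            return c
    return None

print(condorcet_winner([["a", "b", "c"], ["b", "a", "c"], ["a", "c", "b"]]))  # a
```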
Foster, Dean P., and Sergiu Hart.
“An Operational Measure Of Riskiness”.
Discussion Papers 2007. Web.
Abstract: We define the riskiness of a gamble g as that unique number R(g) such that no-bankruptcy is guaranteed if and only if one never accepts gambles whose riskiness exceeds the current wealth.
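The published version of this measure characterizes R(g) as the unique root of E[log(1 + g/R)] = 0 above the gamble's maximal loss; a minimal numerical sketch under that characterization (my code, not the authors', assuming a discrete gamble with losses and positive expectation):

```python
import numpy as np
from scipy.optimize import brentq

def operational_riskiness(values, probs):
    """Foster-Hart R(g): the unique root of E[log(1 + g/R)] = 0 with R
    greater than the maximal loss L."""
    values, probs = np.asarray(values, float), np.asarray(probs, float)
    L = -values.min()                      # maximal loss
    f = lambda R: probs @ np.log1p(values / R)
    return brentq(f, L * (1 + 1e-9), 1e9)

# Win 120 or lose 100 with equal probability:
print(operational_riskiness([120.0, -100.0], [0.5, 0.5]))  # exactly 600
```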
Karni, Edi.
“Bayesian Decision Theory And The Representation Of Beliefs”.
Discussion Papers 2007. Web.
Abstract: In this paper, I present a Bayesian decision theory and define choice-based subjective probabilities that faithfully represent a Bayesian decision maker's prior and posterior beliefs regarding the likelihood of the possible effects contingent on his actions. I argue that no equivalent results can be obtained in Savage's (1954) subjective expected utility theory and give an example illustrating the potential harm caused by ascribing to a decision maker subjective probabilities that do not represent his beliefs.
Abba M. Krieger, Moshe Pollak, and Ester Samuel-Cahn.
“Beat The Mean: Better The Average”.
Discussion Papers 2007. Web.
Abstract: We consider a sequential rule, where an item, such as a university faculty member, is chosen into the group only if its score is better than the average score of those already belonging to the group. We study four variables: the average score of the members of the group after k items have been selected, the time it takes (in terms of number of observed items) to assemble a group of k items, the average score of the group after n items have been observed, and the number of items kept after the first n items have been observed. We develop the relationships between these variables, and obtain their asymptotic behavior as k (respectively, n) tends to infinity. The assumption throughout is that the items are independent, identically distributed, with a continuous distribution. Though knowledge of this distribution is not needed to implement the selection rule, the asymptotic behavior does depend on the distribution. We study in some detail the Exponential, Pareto and Beta distributions. Generalizations of the "better than average" rule to "β better than average" rules are also considered. These are rules where an item is admitted to the group only if its score is better than β times the present average of the group, where β > 0.
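The rule itself is a one-pass procedure and is easy to simulate; a sketch (mine, with the score stream, β, and the Exponential example as assumptions):

```python
import random

def better_than_average(stream, k, beta=1.0):
    """Admit an item iff its score exceeds beta times the current group
    average (the first item is always admitted); stop at group size k.
    Returns the group and the number of items observed."""
    group, observed = [], 0
    for x in stream:
        observed += 1
        if not group or x > beta * (sum(group) / len(group)):
            group.append(x)
        if len(group) == k:
            break
    return group, observed

random.seed(1)
scores = (random.expovariate(1.0) for _ in range(10**6))  # Exponential(1) items
group, observed = better_than_average(scores, k=20)
print(len(group), observed, sum(group) / len(group))
```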
Zapechelnyuk, Andriy.
“Better-Reply Strategies With Bounded Recall”.
Discussion Papers 2007. Web.
Abstract: A decision maker (an agent) is engaged in a repeated interaction with Nature. The objective of the agent is to guarantee to himself a long-run average payoff as large as the best-reply payoff to Nature's empirical distribution of play, no matter what Nature does. An agent with perfect recall can achieve this objective by a simple better-reply strategy. In this paper we demonstrate that the relationship between perfect recall and bounded recall is not straightforward: an agent with bounded recall may fail to achieve this objective, no matter how long his recall is and no matter what better-reply strategy he employs.
Emek, Yuval, and Michal Feldman.
“Computing An Optimal Contract In Simple Technologies”.
Discussion Papers 2007. Web.
Abstract: We study an economic setting in which a principal motivates a team of strategic agents to exert costly effort toward the success of a joint project. The action taken by each agent is hidden and affects the (binary) outcome of the agent's individual task stochastically. A Boolean function, called technology, maps the individual tasks' outcomes into the outcome of the whole project. The principal induces a Nash equilibrium on the agents' actions through payments that are conditioned on the project's outcome (rather than the agents' actual actions), and the main challenge is that of determining the Nash equilibrium that maximizes the principal's net utility, referred to as the optimal contract. Babaioff, Feldman and Nisan [1] suggest and study a basic combinatorial agency model for this setting. Here, we concentrate mainly on two extreme cases: the AND and OR technologies. Our analysis of the OR technology resolves an open question and disproves a conjecture raised in [1]. In particular, we show that while the AND case admits a polynomial-time algorithm, computing the optimal contract in the OR case is NP-hard. On the positive side, we devise an FPTAS for the OR case, which also sheds some light on optimal contract approximation of general technologies.
Ullmann-Margalit, Edna.
“Difficult Choices: To Agonize Or Not To Agonize?”.
Discussion Papers 2007. Web.
Abstract: What makes a choice difficult, beyond being complex or difficult to calculate? Characterizing difficult choices as posing a special challenge to the agent, and as typically involving consequences of significant moment as well as clashes of values, the article proceeds to compare the way difficult choices are handled by rational choice theory and by the theory that preceded it, Kurt Lewin's "conflict theory." The argument is put forward that within rational choice theory no choice is in principle difficult: if the object is to maximize some value, the difficulty can be at most calculative. Several prototypes of choices that challenge this argument are surveyed and discussed (picking, multidimensionality, "big decisions" and dilemmas); special attention is given to difficult choices faced by doctors and lawyers. The last section discusses a number of devices people employ in their attempt to cope with difficult choices: escape, "reduction" to non-difficult choices, and second-order strategies.
Avrahami, Judith, and Yaakov Kareev.
“Distribution Of Resources In A Competitive Environment”.
Discussion Papers 2007. Web.
Abstract: When two agents of unequal strength compete, the stronger one is expected to always win the competition. This expectation is based on the assumption that evaluation of performance is flawless. If, however, the agents are evaluated on the basis of only a small sample of their performance, the weaker agent still stands a chance of winning occasionally. A theoretical analysis indicates that for this to happen, the weaker agent must introduce variability into the effort he or she invests in the behavior, such that on some occasions the weaker agent's level of performance is as high as that of the stronger agent, whereas on others it is lower. This, in turn, would drive the stronger agent to introduce variability into his or her behavior. We model this situation in a game, present its game-theoretic solution, and report an experiment, involving 144 individuals, in which we tested whether players are actually sensitive to their relative strengths and know how to allocate their resources given those relative strengths. Our results indicate that they do.
Hart, Sergiu, and Benjamin Weiss.
“Evolutionarily Stable Strategies Of Random Games, And The Vertices Of Random Polygons”.
Discussion Papers 2007. Web.
Abstract: An evolutionarily stable strategy (ESS) is an equilibrium strategy that is immune to invasions by rare alternative ("mutant") strategies. Unlike Nash equilibria, ESS do not always exist in finite games. In this paper, we address the question of what happens when the size of the game increases: does an ESS exist for "almost every large" game? Letting the entries in the n × n game matrix be randomly chosen according to an underlying distribution F, we study the number of ESS with support of size 2. In particular, we show that, as n goes to infinity, the probability of having such an ESS: (i) converges to 1 for distributions F with "exponential and faster decreasing tails" (e.g., uniform, normal, exponential); and (ii) converges to 1 - 1/sqrt(e) for distributions F with "slower than exponential decreasing tails" (e.g., lognormal, Pareto, Cauchy). Our results also imply that the expected number of vertices of the convex hull of n random points in the plane converges to infinity for the distributions in (i), and to 4 for the distributions in (ii).
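The convex-hull corollary is easy to probe by simulation; a quick Monte Carlo sketch (mine, with the sample sizes and trial counts as arbitrary choices):

```python
import numpy as np
from scipy.spatial import ConvexHull

def mean_hull_vertices(sampler, n, trials=200, seed=0):
    # Average number of convex-hull vertices over random point sets.
    rng = np.random.default_rng(seed)
    return np.mean([len(ConvexHull(sampler(rng, n)).vertices)
                    for _ in range(trials)])

uniform = lambda rng, n: rng.uniform(size=(n, 2))          # class (i): keeps growing
cauchy = lambda rng, n: rng.standard_cauchy(size=(n, 2))   # class (ii): approaches 4

for n in (100, 1000, 10000):
    print(n, mean_hull_vertices(uniform, n), mean_hull_vertices(cauchy, n))
```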
Yaakov Kareev, Klaus Fiedler, and Judith Avrahami.
“Expected Prediction Accuracy And The Usefulness Of Contingencies”.
Discussion Papers 2007. Web.
Abstract: Regularities in the environment are used to decide what course of action to take and how to prepare for future events. Here we focus on the utilization of regularities for prediction and argue that the commonly considered measure of regularity - the strength of the contingency between antecedent and outcome events - does not fully capture the goodness of a regularity for predictions. We propose, instead, a new measure - the level of expected prediction accuracy (ExpPA) - which takes into account the fact that, at times, maximal prediction accuracy can be achieved by always predicting the same, most prevalent outcome, and at others, by predicting one outcome for one antecedent and another for the other. Two experiments, testing the ExpPA measure in explaining participants' behavior, found that participants are sensitive to the twin facets of ExpPA and that prediction behavior is best explained by this new measure.
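One natural formalization of the idea (my reconstruction; the paper's exact definition of ExpPA may differ) compares the two prediction policies the abstract mentions:

```python
def expected_prediction_accuracy(table):
    """table[a][o]: joint probability (or count) of antecedent a and outcome o.
    Best accuracy is the larger of (a) always predicting the most prevalent
    outcome and (b) predicting, per antecedent, its most likely outcome."""
    total = sum(sum(row) for row in table)
    marginal = max(sum(row[o] for row in table) for o in range(len(table[0])))
    conditional = sum(max(row) for row in table)
    return max(marginal, conditional) / total

# Conditioning on the antecedent pays off here: 0.8 versus 0.5.
print(expected_prediction_accuracy([[0.4, 0.1], [0.1, 0.4]]))  # 0.8
```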
Peleg, Bezalel, and Ariel D. Procaccia.
“Implementation By Mediated Equilibrium”.
Discussion Papers 2007. Web.
Abstract: Implementation theory tackles the following problem: given a social choice correspondence, find a decentralized mechanism such that for every constellation of the individuals' preferences, the set of outcomes in equilibrium is exactly the set of socially optimal alternatives (as specified by the correspondence). In this paper we are concerned with implementation by mediated equilibrium; under such an equilibrium, a mediator coordinates the players' strategies in a way that discourages deviation. Our main result is a complete characterization of social choice correspondences which are implementable by mediated strong equilibrium. This characterization, in addition to being strikingly concise, implies that some important social choice correspondences which are not implementable by strong equilibrium are in fact implementable by mediated strong equilibrium.
Guttel, Ehud, and Barak Medina.
“Less Crime, More (Vulnerable) Victims: Game Theory And The Distributional Effects Of Criminal Sanctions”.
Discussion Papers 2007. Web.
Abstract: Harsh sanctions are conventionally assumed to primarily benefit vulnerable targets. Contrary to this perception, this article shows that augmented sanctions often serve the less vulnerable targets. While decreasing crime, harsher sanctions also induce the police to shift enforcement efforts from more to less vulnerable victims. When this shift is substantial, augmented sanctions exacerbate, rather than reduce, the risk to vulnerable victims. Based on this insight, this article suggests several normative implications concerning the efficacy of enhanced sanctions, the importance of victims' funds, and the connection between police operations and apprehension rates.
Peleg, Bezalel, and Ariel D. Procaccia.
“Mediators Enable Truthful Voting”.
Discussion Papers 2007. Web.
Abstract: The Gibbard-Satterthwaite Theorem asserts the impossibility of designing a non-dictatorial voting rule in which truth-telling always constitutes a Nash equilibrium. We show that in voting games of complete information where a mediator is on hand, this troubling impossibility result can be alleviated. Indeed, we characterize families of voting rules where, given a mediator, truthful preference revelation is always in strong equilibrium. In particular, we observe that the family of feasible elimination procedures has the foregoing property.
Morvai, Gusztáv, and Benjamin Weiss.
“On Sequential Estimation And Prediction For Discrete Time Series”.
Discussion Papers 2007. Web.
Abstract: The problem of extracting as much information as possible from a sequence of observations of a stationary stochastic process X0, X1, ..., Xn has been considered by many authors from different points of view. It has long been known through the work of D. Bailey that no universal estimator for P(Xn+1|X0, X1, ..., Xn) can be found which converges to the true estimator almost surely. Despite this result, for restricted classes of processes, or for sequences of estimators along stopping times, universal estimators can be found. We present here a survey of some of the recent work that has been done along these lines.
Mandel, Micha, and Yosef Rinott.
“On Statistical Inference Under Selection Bias”.
Discussion Papers 2007. Web.
Abstract: This note revisits the problem of selection bias, using a simple binomial example. It focuses on selection that is introduced by observing the data and making decisions prior to formal statistical analysis. Decision rules and interpretations of confidence measures and results must then be taken relative to the point of view of the decision maker, i.e., before selection or after it. Such a distinction is important since inference can be considerably altered when the decision maker's point of view changes. This note demonstrates the issue, using both the frequentist and the Bayesian paradigms.
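The flavor of the problem is easy to reproduce numerically; a minimal simulation in the same binomial spirit (my illustration, not the note's own example): pick the best-looking of several identical binomial arms after seeing the data, and the naive estimate is biased upward.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, arms, reps = 0.5, 20, 5, 100_000
counts = rng.binomial(n, p, size=(reps, arms))   # five identical "treatments"
best = counts.max(axis=1) / n                    # estimate chosen after seeing the data
print(best.mean())                               # about 0.63, versus the true 0.5
```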