Publications | The Federmann Center for the Study of Rationality

Publications

2010
Noga Alon, Yuval Emek, Michal Feldman, and Moshe Tennenholtz. Bayesian Ignorance. Discussion Papers, 2010.
We quantify the effect of Bayesian ignorance by comparing the social cost obtained in a Bayesian game by agents with local views to the expected social cost of agents having global views. Both benevolent agents, whose goal is to minimize the social cost, and selfish agents, aiming at minimizing their own individual costs, are considered. When dealing with selfish agents, we consider both best and worst equilibria outcomes. While our model is general, most of our results concern the setting of network cost sharing (NCS) games. We provide tight asymptotic results on the effect of Bayesian ignorance in directed and undirected NCS games with benevolent and selfish agents. Among our findings we expose the counter-intuitive phenomenon that "ignorance is bliss": Bayesian ignorance may substantially improve the social cost of selfish agents. We also prove that public random bits can replace the knowledge of the common prior in an attempt to bound the effect of Bayesian ignorance in settings with benevolent agents. Together, our work initiates the study of the effects of local vs. global views on the social cost of agents in Bayesian contexts.
Yaakov Malinovsky and Yosef Rinott. Best Invariant and Minimax Estimation of Quantiles in Finite Populations. Discussion Papers, 2010.
We study estimation of finite population quantiles, with emphasis on estimators that are invariant under monotone transformations of the data, and suitable invariant loss functions. We discuss non-randomized and randomized estimators, best invariant and minimax estimators and sampling strategies relative to different classes. The combination of natural invariance of the kind discussed here, and finite population sampling appears to be novel, and leads to interesting statistical and combinatorial aspects.
Sergiu Hart. Comparing Risks by Acceptance and Rejection. Discussion Papers, 2010.
Stochastic dominance is a partial order on risky assets ("gambles") that is based on the uniform preference, of all decision-makers (in an appropriate class), for one gamble over another. We modify this, first, by taking into account the status quo (given by the current wealth) and the possibility of rejecting gambles, and second, by comparing rejections that are substantive (that is, uniform over wealth levels or over utilities). This yields two new stochastic orders: wealth-uniform dominance and utility-uniform dominance. Unlike stochastic dominance, these two orders are complete: any two gambles can be compared. Moreover, they are equivalent to the orders induced by, respectively, the Aumann-Serrano (2008) index of riskiness and the Foster-Hart (2009a) measure of riskiness.
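The two indices mentioned above have simple implicit definitions: the Aumann-Serrano (2008) index of riskiness is the unique R > 0 solving E[exp(-g/R)] = 1, and the Foster-Hart (2009a) measure is the unique R above the maximal loss solving E[log(1 + g/R)] = 0. A minimal numerical sketch, not taken from the paper; the bisection helper, function names, and the toy gamble are chosen here purely for illustration:

```python
import math

def _bisect(f, lo, hi, iters=200):
    # plain bisection; assumes f(lo) and f(hi) have opposite signs
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def aumann_serrano(outcomes, probs):
    # unique R > 0 with E[exp(-g/R)] = 1 (needs E[g] > 0 and some chance of loss)
    h = lambda R: sum(p * math.exp(-x / R) for x, p in zip(outcomes, probs)) - 1
    max_loss = -min(outcomes)
    return _bisect(h, max_loss / 500, 1e7)

def foster_hart(outcomes, probs):
    # unique R above the maximal loss with E[log(1 + g/R)] = 0
    max_loss = -min(outcomes)
    g = lambda R: sum(p * math.log(1 + x / R) for x, p in zip(outcomes, probs))
    return _bisect(g, max_loss * (1 + 1e-9), 1e9)

# toy gamble: gain 120 or lose 100, each with probability 1/2
outcomes, probs = [120, -100], [0.5, 0.5]
print(foster_hart(outcomes, probs))     # ~600 (the exact FH value for this gamble)
print(aumann_serrano(outcomes, probs))  # a somewhat larger number for this gamble
```

For this gamble the Foster-Hart equation solves exactly: (1/2)log(1 + 120/600) + (1/2)log(1 - 100/600) = (1/2)log(1.2 · 5/6) = 0.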
Yakov Babichenko. Completely Uncoupled Dynamics and Nash Equilibria. Discussion Papers, 2010.
A completely uncoupled dynamic is a repeated play of a game, where each period every player knows only his action set and the history of his own past actions and payoffs. One main result is that there exist no completely uncoupled dynamics with finite memory that lead to pure Nash equilibria (PNE) in almost all games possessing pure Nash equilibria. By "leading to PNE" we mean that the frequency of time periods at which some PNE is played converges to 1 almost surely. Another main result is that this is not the case when PNE is replaced by "Nash epsilon-equilibria": we exhibit a completely uncoupled dynamic with finite memory such that from some time on a Nash epsilon-equilibrium is played almost surely.
Judith Avrahami and Yaakov Kareev. Detecting Change in Partner's Preferences. Discussion Papers, 2010.
Studies of the detection of change have commonly been concerned with individuals inspecting a system or a process whose characteristics were fully determined by the researcher. We, instead, study the detection of change in the preferences - and hence the behavior - of others with whom an individual interacts. More specifically, we study situations in which one's benefits are the result of the joint actions of oneself and one's partner, when at times the preferred combination is the same for both and at times it is not. In other words, what we change is the payoffs associated with the different combinations of interactive choices, and then look at choice behavior following such a change. We find that players are extremely quick to respond to a change in the preferences of their counterparts. This responsiveness can be explained by the players' impulsive reaction to regret, if any was due, at their most recent decision.
Amir Ban and Nati Linial. The Dynamics of Reputation Systems. Discussion Papers, 2010.
Online reputation systems collect, maintain and disseminate reputations as a summary numerical score of past interactions of an establishment with its users. As reputation systems, including web search engines, gain in popularity and become a common method for people to select sought services, a dynamical system unfolds: experts' reputation attracts potential customers; the experts' expertise affects the probability of satisfying the customers; this rate of success in turn influences the experts' reputation. We consider here several models where each expert has an innate, constant, but unknown level of expertise and a publicly known, dynamically varying reputation. The specific
Edith Cohen, Michal Feldman, Amos Fiat, Haim Kaplan, and Svetlana Olonetsky. Envy-Free Makespan Approximation. Discussion Papers, 2010.
We study envy-free mechanisms for scheduling tasks on unrelated machines (agents) that approximately minimize the makespan. For indivisible tasks, we put forward an envy-free poly-time mechanism that approximates the minimal makespan to within a factor of O(log m), where m is the number of machines. We also show a lower bound of Omega(log m / log log m). This improves the recent result of Mu'alem [22], who gives an upper bound of (m + 1)/2 and a lower bound of 2 - 1/m. For divisible tasks, we show that there always exists an envy-free poly-time mechanism with optimal makespan. Finally, we demonstrate how our mechanism for envy-free makespan minimization can be interpreted as a market clearing problem.
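Envy-freeness here is the standard quasi-linear notion: each machine, evaluated at its own costs, weakly prefers its assigned (bundle, payment) pair to that of every other machine. A small checker as a sketch; the function names and the two-machine toy instance are invented for illustration, not drawn from the paper:

```python
def is_envy_free(cost, bundles, pay):
    """cost[i][j] = cost of job j on machine i (unrelated machines);
    bundles[i] = set of jobs assigned to machine i; pay[i] = its payment.
    Envy-free: pay[i] - c_i(bundles[i]) >= pay[k] - c_i(bundles[k]) for all i, k."""
    def c(i, jobs):
        # machine i's total cost for a bundle of jobs
        return sum(cost[i][j] for j in jobs)
    n = len(cost)
    return all(pay[i] - c(i, bundles[i]) >= pay[k] - c(i, bundles[k]) - 1e-12
               for i in range(n) for k in range(n))

cost = [[1, 3],
        [3, 1]]                                    # each machine is fast on "its" job
print(is_envy_free(cost, [{0}, {1}], [1, 1]))      # True: equal pay, cheaper own bundle
print(is_envy_free(cost, [{0}, {1}], [1, 5]))      # False: machine 0 envies machine 1
```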
Jay Bartroff and Ester Samuel-Cahn. The Fighter Problem: Optimal Allocation of a Discrete Commodity. Discussion Papers, 2010.
The Fighter problem with discrete ammunition is studied. An aircraft (fighter) equipped with n anti-aircraft missiles is intercepted by enemy airplanes, the appearance of which follows a homogeneous Poisson process with known intensity. If j of the n missiles are spent at an encounter they destroy an enemy plane with probability a(j), where a(0) = 0 and a(j) is a known, strictly increasing concave sequence, e.g., a(j) = 1 - q^j, 0 < q < 1. If the enemy is not destroyed, the enemy shoots the fighter down with known probability 1 - u, where 0 <= u <= 1. The goal of the fighter is to shoot down as many enemy airplanes as possible during a given time period [0, T]. Let K(n, t) be an optimal number of missiles to be used at a present encounter, when the fighter has flying time t remaining and n missiles remaining. Three seemingly obvious properties of K(n, t) have been conjectured: [A] the closer to the destination, the more of the n missiles one should use; [B] the more missiles one has, the more one should use; and [C] the more missiles one has, the more one should save for possible future encounters. We show that [C] holds for all 0 <= u <= 1, that [A] and [B] hold for the "Invincible Fighter" (u = 1), and that [A] holds but [B] fails for the "Frail Fighter" (u = 0).
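The structure of K(n, t) can be illustrated with a discrete-time dynamic-programming sketch for the "Invincible Fighter" case (u = 1) only. This is not the paper's continuous-time Poisson model: the per-period encounter probability p_enc, the parameter values, and the function names are all assumptions made here for illustration.

```python
from functools import lru_cache

p_enc, q = 0.3, 0.5      # illustrative: per-period encounter probability; a(j) = 1 - q**j

def a(j):
    # hit probability when spending j missiles: a(0) = 0, strictly increasing, concave
    return 1 - q ** j

@lru_cache(maxsize=None)
def V(n, t):
    # expected number of enemies destroyed with n missiles and t periods remaining
    if n == 0 or t == 0:
        return 0.0
    best = max(a(j) + V(n - j, t - 1) for j in range(n + 1))
    return (1 - p_enc) * V(n, t - 1) + p_enc * best

def K(n, t):
    # an optimal number of missiles to spend at a present encounter
    return max(range(n + 1), key=lambda j: a(j) + V(n - j, t - 1))

print(K(4, 1))   # 4: with no future encounters possible, spend everything
```

With only one period left the continuation value is zero, so the fighter spends all its missiles; with more time remaining the recursion trades off a(j) against the value of saved ammunition.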
Ziv Hellman and Dov Samet. How Common Are Common Priors? Discussion Papers, 2010.
To answer the question in the title we vary agents' beliefs against the background of a fixed knowledge space, that is, a state space with a partition for each agent. Beliefs are the posterior probabilities of agents, which we call type profiles. We then ask what is the topological size of the set of consistent type profiles, those that are derived from a common prior (or a common improper prior in the case of an infinite state space). The answer depends on what we term the tightness of the partition profile. A partition profile is tight if in some state it is common knowledge that any increase of any single agent's knowledge results in an increase in common knowledge. We show that for partition profiles which are tight the set of consistent type profiles is topologically large, while for partition profiles which are not tight this set is topologically small.
Yakov Babichenko. How Long to Pareto Efficiency? Discussion Papers, 2010.
We consider uncoupled dynamics (i.e., dynamics where each player knows only his own payoff function) that reach Pareto efficient and individually rational outcomes. We prove that the number of periods it takes is in the worst case exponential in the number of players.
Ron Peretz. Learning Cycle Length through Finite Automata. Discussion Papers, 2010.
We study the space-and-time automaton-complexity of the CYCLE-LENGTH problem. The input is a periodic stream of bits whose cycle length is bounded by a known number n. The output, a number between 1 and n, is the exact cycle length. We also study a related problem, CYCLE-DIVISOR, in which the output is a large number that divides the cycle length, that is, a number k >> 1 that divides the cycle length, or (in case the cycle length is small) the cycle length itself. The complexity is measured in terms of the SPACE, the logarithm of the number of states in an automaton that solves the problem, and the TIME required to reach a terminal state. We analyze the worst input against a deterministic (pure) automaton, and against a probabilistic (mixed) automaton. In the probabilistic case we require that the probability of computing a correct output is arbitrarily close to one. We establish the following results:
- CYCLE-DIVISOR can be solved in deterministic SPACE o(n) and TIME O(n).
- CYCLE-LENGTH cannot be solved in deterministic SPACE x TIME smaller than Omega(n^2).
- CYCLE-LENGTH can be solved in probabilistic SPACE o(n) and TIME O(n).
- CYCLE-LENGTH can be solved in deterministic SPACE O(nL) and TIME O(n/L), for any positive L < 1.
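Ignoring all space and time constraints, the function being computed is just the minimal period of the stream, which can be read off a window of 2n bits. A naive reference sketch (names chosen here) that uses O(n) space and O(n^2) comparisons, far from the automaton constructions in the paper:

```python
def cycle_length(bits, n):
    # bits: a purely periodic stream; n: known upper bound on the cycle length.
    # Returns the minimal period, using only the first 2n bits of the stream.
    window = bits[: 2 * n]
    for k in range(1, n + 1):
        # k is the period iff every bit equals the bit k positions earlier
        if all(window[i] == window[i - k] for i in range(k, len(window))):
            return k
    raise ValueError("stream is not periodic with cycle length <= n")

print(cycle_length([1, 1, 0, 1] * 10, 6))  # 4
```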
Yoed Halbersberg. Liability Standards for Multiple-Victim Torts: A Call for a New Paradigm. Discussion Papers, 2010.
Under the conventional approach in torts, liability for an accident is decided by comparing the injurer's costs of precautions with those of the victim, and, under the negligence rule, also with the expected magnitude of harm. In multiple-victim cases, the current paradigm holds that courts should determine liability by comparing the injurer's costs of precautions with the victims' aggregate costs and with their aggregate harm. This aggregative risk-utility test supposedly results in the imposition of liability on the least-cost avoiders of the accident, and, therefore, is assumed efficient. However, this paradigm neglects the importance of the normal differences between tort victims. When victims are heterogeneous with regard to their expected harm or costs of precaution, basing the liability decision on the aggregate amounts may be incorrect, causing in some cases over-deterrence, while in others, under-deterrence and dilution of liability. A new paradigm is therefore needed. This Article demonstrates how aggregate liability may violate aggregate efficiency, and concludes that decisions based upon aggregate amounts are inappropriate when the victims are heterogeneous, as they typically are in real life. The Article then turns to an exploration of an alternative to the aggregative risk-utility test, and argues for a legal rule that would combine restitution for precaution costs, plus an added small "bonus," with the sampling of victims' claims.
Eytan Sheshinski. Limits on Individual Choice. Discussion Papers, 2010.
Individuals behave with choice probabilities defined by a multinomial logit (MNL) probability distribution over a finite number of alternatives which includes utilities as parameters. The salient feature of the model is that probabilities depend on the choice-set, or domain. Expanding the choice-set decreases the probabilities of alternatives included in the original set, providing positive probabilities to the added alternatives. The wider probability 'spread' causes some individuals to further deviate from their higher valued alternatives, while others find the added alternatives highly valuable. For a population with diverse preferences, there exists a subset of alternatives, called the optimum choice-set, which balances these considerations to maximize social welfare. The paper analyses the dependence of the optimum choice-set on a parameter which specifies the precision of individuals' choice ('degree of rationality'). It is proved that for high values of this parameter the optimum choice-set includes all alternatives, while for low values it is a singleton. Numerical examples demonstrate that for intermediate values, the size and possible nesting of the optimum choice-sets is complex. Governments have various means (defaults, tax/subsidy) to directly affect choice probabilities. This is modelled by 'probability weight' parameters. The paper analyses the structure of the optimum weights, focusing on the possible exclusion of alternatives. A binary example explores the levels of 'type one' and 'type two' errors which justify the imposition of early eligibility for retirement benefits, common to social security systems. Finally, the effects of heterogeneous degrees of rationality among individuals are briefly discussed.
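The MNL probabilities underlying the model are exponentially weighted utilities, and the 'spread' effect of expanding the choice-set follows directly from the common denominator. A short sketch; the utility values and the beta parameter are illustrative numbers chosen here, not from the paper:

```python
import math

def mnl_probs(utils, beta):
    # multinomial logit: P(i) proportional to exp(beta * u_i), where beta is
    # the precision ("degree of rationality") parameter
    weights = [math.exp(beta * u) for u in utils]
    total = sum(weights)
    return [w / total for w in weights]

small = mnl_probs([1.0, 0.5], beta=2.0)
large = mnl_probs([1.0, 0.5, 0.8], beta=2.0)
# expanding the choice-set strictly lowers every original alternative's probability
print(small[0] > large[0] and small[1] > large[1])  # True
```

As beta grows, choice concentrates on the highest-utility alternative (the optimum choice-set includes everything); as beta shrinks, choice approaches uniform and added alternatives mostly pull probability away from the better ones.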
Maya Bar-Hillel. Discussion Papers, 2010.
Scientists try to find out the truth about our world. Judges in a court of law try to find out the truth about the target events in the indictment. What are the similarities, and what are the differences, in the procedures that govern the search for truth in these two systems? In particular, why are quantitative tools the hallmark of science, whereas in courts they are rarely used, and when used, are prone to error? (In Hebrew)
Moses Shayo and Alon Harel. Non-Consequentialist Voting. Discussion Papers, 2010.
Standard theory assumes that voters' preferences over actions (voting) are induced by their preferences over electoral outcomes (policies, candidates). But voters may also have non-consequentialist (NC) motivations: they may care about how they vote even if it does not affect the outcome. When the likelihood of being pivotal is small, NC motivations can dominate voting behavior. To examine the prevalence of NC motivations, we design an experiment that exogenously varies the probability of being pivotal yet holds constant other features of the decision environment. We find a significant effect, consistent with at least 12.5% of subjects being motivated by NC concerns.
Jay Bartroff, Larry Goldstein, Yosef Rinott, and Ester Samuel-Cahn. On Optimal Allocation of a Continuous Resource Using an Iterative Approach and Total Positivity. Discussion Papers, 2010.
We study a class of optimal allocation problems, including the well-known Bomber Problem, with the following common probabilistic structure. An aircraft equipped with an amount x of ammunition is intercepted by enemy airplanes arriving according to a homogeneous Poisson process over a fixed time duration t. Upon encountering an enemy, the aircraft has the choice of spending any amount 0
Yoed Halbersberg. On the Deduction of National Insurance Payments from Tort Victims' Claims. Discussion Papers, 2010.
In CA 1093/07 Bachar v. Fokmann [2009] (request for additional hearing denied, 2010), the Israeli Supreme Court formulated a formula for calculating the deduction of NII payments from a tort victim's claim when only some of the victim's impairment is causally linked to the tortious act in question. Overall, six Supreme Court Justices have reviewed and affirmed this simple formula. However, this formula is incorrect, as it contradicts some of the most basic tort premises, ignores the way impairment is calculated, and necessarily leads to the under-compensation of the victim and to an unjust enrichment of the tortfeasor, the National Insurance Institute, or both. This Article therefore calls for the adoption of a different formula that is both legally and arithmetically correct.
Bezalel Peleg, Peter Sudhölter, and José M. Zarzuelo. On the Impact of Independence of Irrelevant Alternatives. Discussion Papers, 2010.
On several classes of n-person NTU games that have at least one Shapley NTU value, Aumann characterized this solution by six axioms: non-emptiness, efficiency, unanimity, scale covariance, conditional additivity, and independence of irrelevant alternatives (IIA). Each of the first five axioms is logically independent of the remaining axioms, and the logical independence of IIA is an open problem. We show that for n = 2 the first five axioms already characterize the Shapley NTU value, provided that the class of games is not further restricted. Moreover, we present an example of a solution that satisfies the first five axioms and violates IIA for 2-person NTU games (N, V) with uniformly p-smooth V(N).
Marco Francesconi, Christian Ghiglino, and Motty Perry. On the Origin of the Family. Discussion Papers, 2010.
This paper presents an overlapping generations model to explain why humans live in families rather than in other pair groupings. Since most non-human species are not familial, something special must be behind the family. It is shown that the two necessary features that explain the origin of the family are uncertain paternity and overlapping cohorts of dependent children. With these two features built into our model, and under the assumption that individuals care only for the propagation of their own genes, our analysis indicates that fidelity families dominate promiscuous pair bonding, in the sense that they can achieve greater survivorship and enhanced genetic fitness. The explanation lies in the free-riding behavior that characterizes the interactions between competing fathers in the same promiscuous pair grouping. Kin ties could also be related to the emergence of the family. When we consider a kinship system in which an adult male transfers resources not just to his offspring but also to his younger siblings, we find that kin ties never emerge as an equilibrium outcome in a promiscuous environment. In a fidelity family environment, instead, kinship can occur in equilibrium and, when it does, it is efficiency enhancing in terms of greater survivorship and fitness. The model can also be used to shed light on why virtually all major world religions are centered around the importance of the family.
Alex Gershkov and Benny Moldovanu. Optimal Search, Learning and Implementation. Discussion Papers, 2010.
We characterize the incentive compatible, constrained efficient policy ("second-best") in a dynamic matching environment, where impatient, privately informed agents arrive over time, and where the designer gradually learns about the distribution of agents' values. We also derive conditions on the learning process ensuring that the complete-information, dynamically efficient allocation of resources ("first-best") is incentive compatible. Our analysis reveals and exploits close, formal relations between the problem of ensuring implementable allocation rules in our dynamic allocation problems with incomplete information and learning, and the classical problem, posed by Rothschild [19], of finding optimal stopping policies for search that are characterized by a reservation price property.