Publications

2017
Johannes Müller-Trede, Shoham Choshen-Hillel, Meir Barneron, Ilan Yaniv. 2017. “The Wisdom of Crowds in Matters of Taste”. Publisher's Version Abstract
Decision makers can often improve the accuracy of their judgments on factual matters by consulting 'crowds' of others for their respective opinions. In this article, we investigate whether decision makers could similarly draw on crowds to improve the accuracy of their judgments about their own tastes and hedonic experiences. We present a theoretical model which states that accuracy gains from consulting a crowd's judgments of taste depend on the interplay among taste discrimination, crowd diversity, and the similarity between the crowd's preferences and those of the decision maker. The model also delineates the boundary conditions for such 'crowd wisdom.' Evidence supporting our hypotheses was found in two laboratory studies in which decision makers made judgments about their own enjoyment of musical pieces and short films. Our findings suggest that, although different people may have different preferences and inclinations, their judgments of taste can benefit from the wisdom of crowds.
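The interplay the model describes (taste discrimination, crowd diversity, and preference similarity) can be illustrated with a minimal signal-plus-noise simulation. The parameterization below is our own illustrative assumption, not the authors' model: crowd members' judgments track the decision maker's true taste up to a `similarity` factor plus idiosyncratic noise.

```python
import random

random.seed(0)

def simulate(n_items=2000, crowd_size=10, similarity=0.9, noise=1.0):
    """Compare a decision maker's own noisy taste judgment with a
    crowd-informed judgment (equal-weight average of own judgment and
    the crowd mean). All parameters are illustrative assumptions."""
    own_err = crowd_err = 0.0
    for _ in range(n_items):
        true_taste = random.gauss(0, 1)            # DM's true enjoyment of the item
        own = true_taste + random.gauss(0, noise)  # DM's own noisy judgment
        # each crowd member's judgment: shared component plus idiosyncratic noise
        crowd = [similarity * true_taste + random.gauss(0, noise)
                 for _ in range(crowd_size)]
        crowd_mean = sum(crowd) / crowd_size
        combined = 0.5 * own + 0.5 * crowd_mean    # simple equal-weight pooling
        own_err += (own - true_taste) ** 2
        crowd_err += (combined - true_taste) ** 2
    return own_err / n_items, crowd_err / n_items

own_mse, combined_mse = simulate()
print(own_mse, combined_mse)  # pooling with a similar crowd lowers squared error
```

With a sufficiently similar crowd the pooled judgment has lower squared error than the decision maker's own judgment; lowering `similarity` toward zero erodes the gain, matching the boundary conditions the abstract alludes to.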
2016
In many common interactive scenarios, participants lack information about other participants, and specifically about the preferences of other participants. In this work, we model an extreme case of incomplete information, which we term games with type ambiguity, where a participant lacks even information enabling him to form a belief on the preferences of others. Under type ambiguity, one cannot analyze the scenario using the commonly used Bayesian framework, and therefore one needs to model the participants using a different decision model. To this end, we present the MINthenMAX decision model under ambiguity. This model is a refinement of Wald's MiniMax principle, which we show to be too coarse for games with type ambiguity. We characterize MINthenMAX as the finest refinement of the MiniMax principle that satisfies three properties we claim are necessary for games with type ambiguity. This prior-less approach we present here also follows the common practice in computer science of worst-case analysis. Finally, we define and analyze the corresponding equilibrium concept, when all players follow MINthenMAX. We demonstrate this equilibrium by applying it to two common economic scenarios: coordination games and bilateral trade. We show that in both scenarios, an equilibrium in pure strategies always exists, and we analyze the equilibria.
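Reading the name literally (our reading, not a quotation of the paper's formal definition), MINthenMAX first applies Wald's MiniMax criterion and then breaks the remaining ties by the best case. A short sketch with a made-up payoff table:

```python
def min_then_max(actions):
    """MINthenMAX choice under ambiguity, as we read the definition:
    among the acts with the best worst-case payoff (Wald's MiniMax
    principle), keep those with the best best-case payoff.
    `actions` maps an action name to its list of possible payoffs."""
    best_worst = max(min(p) for p in actions.values())
    maximin = {a: p for a, p in actions.items() if min(p) == best_worst}
    best_best = max(max(p) for p in maximin.values())
    return sorted(a for a, p in maximin.items() if max(p) == best_best)

# Acts "a" and "b" share the same worst case (0), so plain MiniMax cannot
# separate them; MINthenMAX breaks the tie by the best case.
acts = {"a": [0, 5], "b": [0, 2], "c": [-1, 9]}
print(min_then_max(acts))  # → ['a']
```

This illustrates why MINthenMAX is a strict refinement of MiniMax: any MiniMax-optimal act survives the first step, and the second step discards those that are dominated in the best case.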
Corporate entities enjoy legal subjectivity in a variety of forms, but they are not human beings. This paper explores, from a normative point of view, one of the limits that ought to be imposed on the capacity of corporations to be treated "as if" they had a human nature: their recognition as legitimate bearers of basic human rights. The assertion that corporations, like living persons, are entitled to constitutional protection was famously brought to the fore by a number of recent Supreme Court cases, most notably the Citizens United and Hobby Lobby cases. The rational choice analysis that follows reveals that the new jurisprudence emanating from Citizens United may be justified in the relatively insignificant cases of small companies with an egalitarian distribution of shares, but ought to be rejected in the more meaningful cases of large public corporations with controlling stockholders. The ruling in Hobby Lobby, on the other hand, can be defended regardless of the size of the corporation or the composition of its owners. In both of these cases it is not the rights of the corporate entity that are truly at stake, and the final outcome ought to hinge on the constitutional rights of real human beings.
Uriel Procaccia, Eyal Winter. 2016. “Corporate Crime and Plea Bargains”. Publisher's Version Abstract
Corporate entities enjoy legal subjectivity in a variety of forms, but they are not human beings, and hence their legal capacity to bear rights and obligations of their own is not universal. This paper explores, from a normative point of view, one of the limits that ought to be set on the capacity of corporations to act "as if" they had a human nature: their capacity to commit crime. Accepted wisdom has it that corporate criminal liability is justified as a measure to deter criminal behavior. Our analysis supports this intuition in one subset of cases, but also reveals that deterrence might in fact be undermined in another subset of cases, especially in an environment saturated with plea bargains involving serious violations of the law.
Elchanan Ben-Porath, Eddie Dekel, Barton L. Lipman. 2016. “Disclosure and Choice”. Publisher's Version Abstract
An agent chooses among projects with random outcomes. His payoff is increasing in the outcome and in an observer's expectation of the outcome. With some probability, the agent can disclose the true outcome to the observer. We show that choice is inefficient: the agent favors riskier projects even with lower expected returns. If information can be disclosed by a challenger who prefers lower beliefs of the observer, the chosen project is excessively risky when the agent has better access to information, excessively risk-averse when the challenger has better access, and efficient otherwise. We also characterize the agent's worst-case equilibrium payoff.
An evolutionarily stable strategy (ESS) is an equilibrium strategy that is immune to invasions by rare alternative (mutant) strategies. Unlike Nash equilibria, ESS do not always exist in finite games. In this paper we address the question of what happens when the size of the game increases: does an ESS exist for almost every large game? We let the entries of an n × n game matrix be independently randomly chosen according to a symmetric subexponential distribution F, and study the expected number of ESS with support of size d as n → ∞. In a previous paper by Hart, Rinott and Weiss [6] it was shown that this limit is 1/2 for d = 2. This paper deals with the case of d ≥ 4, and proves the conjecture in [6] (Section 6.c) that the expected number of ESS with support of size d ≥ 4 is 0. Furthermore, it discusses the classic problem of the number of facets of a convex hull of n random points in R^d, and relates it to the above ESS problem. Given a collection of i.i.d. random points, our result implies that the expected number of facets of their convex hull converges to 2^d as n → ∞.
The Gibbard-Satterthwaite Impossibility Theorem (Gibbard, 1973; Satterthwaite, 1975) holds that dictatorship is the only unanimous and strategyproof social choice function on the full domain of preferences. Much of the work in mechanism design aims at getting around this impossibility theorem. Three grand success stories stand out: on the domains of single-peaked preferences, house matching, and quasilinear preferences, there are appealing unanimous and strategyproof social choice functions. We investigate whether these success stories are robust to strengthening strategyproofness to obvious strategyproofness, recently introduced by Li (2015). A social choice function is obviously strategyproof (OSP) implementable if even cognitively limited agents can recognize their strategies as weakly dominant. For single-peaked preferences, we characterize the class of OSP-implementable and unanimous social choice rules as 'dictatorships with safeguards against extremism' (which turn out to also be Pareto optimal): mechanisms in which the dictator can choose the outcome, but other agents may prevent the dictator from choosing an outcome which is too extreme. Median voting is consequently not OSP-implementable. Indeed, the only OSP-implementable quantile rules either choose the minimal or the maximal ideal point. For house matching, we characterize the class of OSP-implementable and Pareto optimal matching rules as 'sequential barter with lurkers', a significant generalization of bossy variants of bipolar serially dictatorial rules. While Li (2015) shows that second-price auctions are OSP-implementable when only one good is sold, we show that this positive result does not extend to the case of multiple goods. Even when all agents' preferences over goods are quasilinear and additive, no welfare-maximizing auction where losers pay nothing is OSP-implementable when more than one good is sold.
Our analysis makes use of a gradual revelation principle, an analog of the (direct) revelation principle for OSP mechanisms that we present and prove.
There is a long history of experiments in which participants are instructed to generate a long sequence of binary random numbers. The scope of this line of research has shifted over the years from identifying the basic psychological principles and/or the heuristics that lead to deviations from randomness, to one of predicting future choices. In this paper, we used generalized linear regression and the framework of Reinforcement Learning in order to address both points. In particular, we used logistic regression analysis in order to characterize the temporal sequence of participants' choices. Surprisingly, a population analysis indicated that the contribution of the most recent trial has only a weak effect on behavior compared to more preceding trials, a result that seems irreconcilable with standard sequential effects that decay monotonously with the delay. However, when considering each participant separately, we found that the magnitudes of the sequential effects are a monotonously decreasing function of the delay, yet these individual sequential effects are largely averaged out in a population analysis because of heterogeneity. The substantial behavioral heterogeneity in this task is further demonstrated quantitatively by considering the predictive power of the model. We show that a heterogeneous model of sequential dependencies captures the structure available in random sequence generation. Finally, we show that the results of the logistic regression analysis can be interpreted in the framework of reinforcement learning, allowing us to compare the sequential effects in the random sequence generation task to those in an operant learning task. We show that in contrast to the random sequence generation task, sequential effects in operant learning are far more homogeneous across the population. These results suggest that in the random sequence generation task, different participants adopt different cognitive strategies to suppress sequential dependencies when generating the “random” sequences.
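A lagged logistic regression of this kind can be sketched in a few lines. The negative-recency generator and all parameters below are illustrative assumptions (not the paper's data): a synthetic "participant" repeats the previous symbol with probability 0.35, and we recover a negative lag-1 weight from the fitted coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "random" sequence with negative recency: the participant
# repeats the previous symbol with probability 0.35 (an assumption).
seq = [0]
for _ in range(4000):
    seq.append(seq[-1] if rng.random() < 0.35 else 1 - seq[-1])
seq = np.array(seq)

# Lagged design matrix: predict choice t from choices t-1..t-k.
k = 3
X = np.column_stack([seq[k - lag:-lag] for lag in range(1, k + 1)])
y = seq[k:]
X = np.column_stack([np.ones(len(y)), 2 * X - 1])  # intercept + ±1 coding

# Plain logistic regression fitted by gradient ascent on the log-likelihood.
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.1 * X.T @ (y - p) / len(y)

print(w)  # w[1] (lag-1 weight) is negative: an alternation bias
```

Per-participant fits of exactly this form are what reveal the individual monotonously decaying sequential effects that a pooled population fit averages out.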
This philosophical work lays the groundwork for a game-theoretic account of (romantic) love, substantiating the folk-psychological conception of love as 'a unification of souls'. It does so by setting up an appropriate universal framework of cognitive agency that accommodates such unifications and motivates them. This framework applies the gene's-eye view of evolution to the evolution of cognition, integrating it with a distributed, dynamic theory of selfhood and the game-theoretic principles of agent-unification that govern these dynamics. The application of this framework to particular biological settings produces love as a theoretical evolutionary prediction (unveiling its rationality). Through this, the connection of the strategic normativity to love's real-life behavioral and phenomenological expressions is systematically explored.
Following Dietrich (2014) we consider using choice by plurality voting (CPV) as a judgment aggregation correspondence. We notice that a result of Roberts (1991) implies that CPV is axiomatically characterized by anonymity, neutrality, unanimity, and (Young's) reinforcement. Following List (2004) and Dietrich (2015) we construct a sequential voting procedure of judgment aggregation which satisfies rationality, anonymity, unanimity, and independence of irrelevant propositions (with respect to a relevance correspondence that does not satisfy transitivity). We offer a tentative characterization of this aggregation procedure.
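CPV as a judgment aggregation correspondence can be sketched directly: each voter submits a complete judgment set, and the correspondence returns the set(s) held by the most voters. The three-proposition example below is our own illustration, not one from the cited papers; note that, because the output is always one of the submitted sets, it inherits the voters' logical consistency (collective rationality).

```python
from collections import Counter

def plurality_judgment(judgment_sets):
    """Choice by plurality voting (CPV) over complete judgment sets:
    return the judgment set(s) submitted by the largest number of
    voters, in sorted order (a correspondence, so ties are possible)."""
    tally = Counter(judgment_sets)
    top = max(tally.values())
    return sorted(j for j, c in tally.items() if c == top)

# Three propositions (p, q, p∧q); each voter submits a consistent vector.
votes = [(1, 1, 1), (1, 0, 0), (1, 1, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
print(plurality_judgment(votes))  # → [(1, 1, 1)]
```

Contrast this with proposition-wise majority voting, which on profiles like the above can return a judgment set held by no voter at all (the discursive dilemma).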
David Heyd, Uriel Procaccia, Uzi Segal. 2016. “The Thorny Quest for a Rational Constitution”. Publisher's Version
2015
Bezalel Peleg, Peter Sudhölter. 2015. “On Bargaining Sets of Convex NTU Games”. Publisher's Version Abstract
We show that the Aumann-Davis-Maschler bargaining set and the Mas-Colell bargaining set of a non-leveled NTU game that is either ordinal convex or coalition merge convex coincide with the core of the game. Moreover, we show by means of an example that the foregoing statement may not be valid if the NTU game is marginal convex.
Miller and Sanjurjo (2015) suggest that many analyses of the hot hand and the gambler's fallacies are subject to a bias. The purpose of this note is to describe our understanding of their main point in terms we hope are simpler and more accessible to non-mathematicians than is the original.
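The bias in question can be verified by exhaustive enumeration. In all equally likely length-n sequences of a fair coin, the within-sequence proportion of heads immediately following a head, averaged across sequences, falls below 1/2, even though the coin is fair. The coin-flip framing below is the standard illustration of Miller and Sanjurjo's point, not code from the note itself:

```python
import itertools

def mean_prop_heads_after_heads(n):
    """Average, over all equally likely length-n fair-coin sequences
    (excluding those with no head in the first n-1 flips), of the
    within-sequence proportion of flips that follow a head and are
    themselves a head. 1 = heads, 0 = tails."""
    props = []
    for seq in itertools.product((0, 1), repeat=n):
        follows = [seq[i + 1] for i in range(n - 1) if seq[i] == 1]
        if follows:  # sequence has at least one head to condition on
            props.append(sum(follows) / len(follows))
    return sum(props) / len(props)

print(mean_prop_heads_after_heads(3))  # → 0.4166666666666667 (= 5/12, not 1/2)
```

Averaging the proportion within each sequence first, and only then across sequences, is exactly what many hot-hand analyses do, and it is what produces the downward bias.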
Sagi Jaffe-Dax, Ofri Raviv, Nori Jacoby, Yonatan Loewenstein, Merav Ahissar. 2015. “A Computational Model of Implicit Memory Captures Dyslexics' Perceptual Deficits”. Publisher's Version Abstract
Dyslexics are diagnosed for their poor reading skills. Yet they characteristically also suffer from poor verbal memory, and often from poor auditory skills. To date, this combined profile has been accounted for in broad cognitive terms. Here, we hypothesize that the perceptual deficits associated with dyslexia can be understood computationally as a deficit in integrating prior information with noisy observations. To test this hypothesis we analyzed the performance of human participants in an auditory discrimination task using a two-parameter computational model. One parameter captures the internal noise in representing the current event, and the other captures the impact of recently acquired prior information. Our findings show that dyslexics' perceptual deficit can be accounted for by inadequate adjustment of these components; namely, low weighting of their implicit memory of past trials relative to their internal noise. Underweighting the stimulus statistics decreased dyslexics' ability to compensate for noisy observations. ERP measurements (P2 component), acquired while participants watched a silent movie, indicated that dyslexics' perceptual deficiency may stem from poor automatic integration of stimulus statistics. Taken together, this study provides the first description of a specific computational deficit associated with dyslexia.
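The two-parameter idea can be sketched with a toy estimator; this is our simplification, not the authors' fitted model. The percept mixes the current noisy observation with an implicit running mean of past stimuli; a low `prior_weight` (underweighting implicit memory, the hypothesized dyslexic profile) leaves more of the internal noise uncompensated, and the 0.1 memory-update rate is an assumption:

```python
import random

random.seed(2)

def estimation_error(prior_weight, n_trials=5000, noise=1.0):
    """Mean squared error of a percept that mixes the current noisy
    observation with an implicit running mean of past stimuli.
    `noise` plays the role of the internal-noise parameter and
    `prior_weight` the weighting of implicit memory."""
    running_mean, sq_err = 0.0, 0.0
    for _ in range(n_trials):
        stimulus = random.gauss(0, 1)                 # true stimulus value
        observed = stimulus + random.gauss(0, noise)  # noisy internal representation
        percept = (1 - prior_weight) * observed + prior_weight * running_mean
        sq_err += (percept - stimulus) ** 2
        running_mean += 0.1 * (observed - running_mean)  # implicit memory update
    return sq_err / n_trials

low_w, adequate_w = estimation_error(0.1), estimation_error(0.5)
print(low_w > adequate_w)  # underweighting implicit memory hurts accuracy
```

In this toy setting the error decomposes into a contraction term (too much prior) and a noise term (too little prior), so an intermediate weight is optimal and underweighting the prior leaves the noise term dominant.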
Classically, risk aversion is equated with concavity of the utility function. In this work we explore the conceptual foundations of this definition. In accordance with neo-classical economics, we seek an ordinal definition, based on the decision maker's preference order, independent of numerical values. We present two such definitions, based on simple, conceptually appealing interpretations of the notion of risk aversion. We then show that when cast in quantitative form these ordinal definitions coincide with the classical Arrow-Pratt definition (once the latter is defined with respect to the appropriate units), thus providing a conceptual foundation for the classical definition. The implications of the theory are discussed, including, in particular, implications for the understanding of insurance. The entire study is within the expected utility framework.
Building on the work of Nash, Harsanyi, and Shapley, we define a cooperative solution for strategic games that takes account of both the competitive and the cooperative aspects of such games. We prove existence in the general (NTU) case and uniqueness in the TU case. Our main result is an extension of the definition and the existence and uniqueness theorems to stochastic games - discounted or undiscounted.
Probability estimation is an essential cognitive function in perception, motor control, and decision making. Many studies have shown that when making decisions in a stochastic operant conditioning task, people and animals behave as if they underestimate the probability of rare events. It is commonly assumed that this behavior is a natural consequence of estimating a probability from a small sample, also known as sampling bias. The objective of this paper is to challenge this common lore. We show that in fact, probabilities estimated from a small sample can lead to behaviors that will be interpreted as underestimating or as overestimating the probability of rare events, depending on the cognitive strategy used. Moreover, this sampling-bias hypothesis makes an implausible prediction that minute differences in the values of the sample size or the underlying probability will determine whether rare events will be underweighted or overweighted. We discuss the implications of this sensitivity for the design and interpretation of experiments. Finally, we propose an alternative sequential learning model with a resetting of initial conditions for probability estimation and show that this model predicts the experimentally-observed robust underweighting of rare events.
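The point that a small sample supports both under- and over-estimation readings can be made with an exact calculation; the numbers below are our own illustration, not the paper's experiment. For a Bernoulli(p) event observed n times, the sample mean is unbiased, yet a large share of samples contain no rare event at all (behavior conditioned on such samples reads as underweighting), while the mean estimate among agents who did observe the event exceeds p (reads as overweighting):

```python
def small_sample_views(p, n):
    """Exact small-sample analysis for a Bernoulli(p) event observed
    n times. Returns (probability that the sample contains no rare
    event, expected sample mean conditional on at least one event)."""
    p_none = (1 - p) ** n               # share of samples with zero occurrences
    mean_given_seen = p / (1 - p_none)  # E[sample mean | at least one occurrence]
    return p_none, mean_given_seen

p_none, mean_seen = small_sample_views(0.1, 7)
print(round(p_none, 3), round(mean_seen, 3))  # → 0.478 0.192
```

With p = 0.1 and n = 7, nearly half the agents never see the rare event (as if p = 0), while those who do see it hold an average estimate of about 0.19, nearly double the truth, so the observed direction of the "bias" hinges on which agents, and which strategy, one looks at.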
Sergiu Hart, Ilan Kremer, Motty Perry. 2015. “Evidence Games: Truth and Commitment”. Publisher's Version Abstract
An evidence game is a strategic disclosure game in which an agent who has different pieces of verifiable evidence decides which ones to disclose and which ones to conceal, and a principal chooses an action (a "reward"). The agent's preference is the same regardless of his information (his "type"): he always prefers the reward to be as high as possible, whereas the principal prefers the reward to fit the agent's type. We compare the setup where the principal chooses the action only after seeing the disclosed evidence to the setup where the principal can commit ahead of time to a reward policy (the latter is the standard mechanism-design setup). The main result is that under natural conditions on the truth structure of the evidence, the two setups yield the same equilibrium outcome.