Publications

2017
Abraham Neyman, Elon Kohlberg. Cooperative Strategic Games. Discussion Papers 2017. Web. Publisher's Version. Abstract:
We examine a solution concept, called the value, for n-person strategic games. In applications, the value provides an a priori assessment of the monetary worth of a player's position in a strategic game, comprising not only the player's contribution to the total payoff but also the player's ability to inflict losses on other players. A salient feature is that the value takes account of the costs that "spoilers" impose on themselves. Our main result is an axiomatic characterization of the value. For every subset S, consider the zero-sum game played between S and its complement, where the players in each of these sets collaborate as a single player, and where the payoff is the difference between the sum of the payoffs to the players in S and the sum of the payoffs to the players not in S. We say that S has an effective threat if the minmax value of this game is positive. The first axiom is that if no subset of players has an effective threat, then all players are allocated the same amount. The second axiom is that if the overall payoff to the players in a game is the sum of their payoffs in two unrelated games, then the overall value is the sum of the values in these two games.
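The notion of an effective threat lends itself to a small illustration. The sketch below checks it for the two singleton coalitions of a two-player strategic game, using pure strategies only; the paper's definition uses the mixed minmax value, which would require solving a linear program, and the payoff matrices here are invented.

```python
# Pure-strategy sketch of the "effective threat" test for the two singleton
# coalitions of a 2-player strategic game. For S = {1} the threat game is
# zero-sum with payoff u1 - u2 and player 1 picking the row; S has an
# effective threat if its maxmin value is positive. (Mixed strategies, as in
# the paper's definition, would need an LP.) Payoffs below are invented.

u1 = [[3, 0],   # player 1's payoffs: rows = 1's actions, columns = 2's actions
      [2, 1]]
u2 = [[1, 4],
      [2, 2]]

def has_effective_threat_1(u1, u2):
    diff = [[u1[a][b] - u2[a][b] for b in range(len(u1[0]))]
            for a in range(len(u1))]
    return max(min(row) for row in diff) > 0   # row player's pure maxmin

def has_effective_threat_2(u1, u2):
    cols = range(len(u1[0]))
    return max(min(u2[a][b] - u1[a][b] for a in range(len(u1)))
               for b in cols) > 0              # column player's pure maxmin

print(has_effective_threat_1(u1, u2))  # False: player 1 cannot threaten here
print(has_effective_threat_2(u1, u2))  # True: player 2 can inflict a net loss
```

With these payoffs only player 2 has an effective threat, so the two players' positions are valued asymmetrically even before any agreement is reached.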
Elon Kohlberg, Abraham Neyman. Games Of Threats. Discussion Papers 2017. Web. Publisher's Version. Abstract:
A game of threats on a finite set of players, $N$, is a function $d$ that assigns a real number to any coalition, $S \subseteq N$, such that $d(S) = -d(N \setminus S)$. A game of threats is not necessarily a coalitional game, as it may fail to satisfy the condition $d(\emptyset) = 0$. We show that analogs of the classic Shapley axioms for coalitional games determine a unique value for games of threats. This value assigns to each player an average of the threat powers, $d(S)$, of the coalitions that include the player.
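The value described in the last sentence admits a direct brute-force computation: average $d(S)$ over the coalitions of each size $s$ that contain the player, then average over sizes. The sketch below implements that per-size averaging form; the additive-style game $d$ and the player set are invented, chosen so that $d(S) = -d(N \setminus S)$ holds.

```python
# Sketch of the value for games of threats: each player receives the average
# (over coalition sizes s = 1..n, then over all size-s coalitions containing
# the player) of the threat powers d(S). The example game d is invented.

from itertools import combinations

def threat_value(players, d):
    n = len(players)
    value = {}
    for i in players:
        per_size = []
        for s in range(1, n + 1):
            coals = [S for S in combinations(players, s) if i in S]
            per_size.append(sum(d(frozenset(S)) for S in coals) / len(coals))
        value[i] = sum(per_size) / n
    return value

# d(S) = sum of a_i inside S minus sum outside S, so d(S) = -d(N \ S) holds
a = {1: 1.0, 2: 0.0}
d = lambda S: sum(a[i] for i in S) - sum(a[i] for i in a if i not in S)
print(threat_value([1, 2], d))   # {1: 1.0, 2: 0.0}; allocations sum to d(N)
```

Note that the allocations sum to $d(N)$, the efficiency property one would expect of a Shapley-like value.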
Shira Cohen-Zimerman, Ran R. Hassin. Implicit Motivation Makes The Brain Grow Younger: Improving Executive Functions Of Older Adults. Discussion Papers 2017. Web. Publisher's Version. Abstract:
The dominant view of cognitive aging holds that while controlled processes (e.g., working memory and executive functions) decline with age, implicit (automatic) processes do not. In this paper we challenge this view by arguing that high-level automatic processes (e.g., implicit motivation) decline with age, and that this decline plays an important and as yet unappreciated role in cognitive aging. Specifically, we hypothesized that due to their decline, high-level automatic processes are less likely to be spontaneously activated in old age, and so their subtle, external activation should have stronger effects on older (vs. younger) adults. In two experiments we used different methods of implicitly activating motivation, and measured executive functions of younger and older adults via the Wisconsin Card Sorting Test (WCST). In Experiment 1 we used goal priming to subtly increase achievement motivation. In Experiment 2 motivation was manipulated by subtly increasing engagement in the task. More specifically, we introduce the Jerusalem Face Sorting Test (JFST), a modified version of the WCST that uses cards with faces instead of geometric shapes. In both experiments, implicitly induced changes in motivation improved older, but not younger, adults' executive functioning. The framework we propose is general, and it has implications for how we view and test cognitive functions. Our case study of older adults offers a new look at various aspects of cognitive aging. Applications of this view to other special populations (e.g., ADHD, schizophrenia) and possible interventions are discussed.
Hart, Sergiu. Repeat Voting: Two-Vote May Lead More People To Vote. Discussion Papers 2017. Web. Publisher's Version. Abstract:
A repeat voting procedure is proposed, whereby voting is carried out in two identical rounds. Every voter can vote in each round, the results of the first round are made public before the second round, and the final result is determined by adding up all the votes in both rounds. It is argued that this simple modification of election procedures may well increase voter participation and result in more accurate and representative outcomes.
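The arithmetic of the procedure (two identical rounds, round-1 totals published in between, final result obtained by adding up all votes from both rounds) can be illustrated with a toy tally; the ballots below are invented, and turnout is allowed to differ across rounds.

```python
# Toy tally for the proposed repeat-voting procedure: every voter may vote in
# each of two identical rounds, and the final result adds up all votes cast.
# Ballots are invented; in practice round-1 totals are published in between.

from collections import Counter

def repeat_vote_tally(round1, round2):
    return Counter(round1) + Counter(round2)   # add up all votes in both rounds

round1 = ["A", "A", "B"]            # published before round 2
round2 = ["B", "B", "A", "B"]       # turnout may differ in round 2
print(repeat_vote_tally(round1, round2))  # Counter({'B': 4, 'A': 3})
```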
Bezalel Peleg, Shmuel Zamir. Sequential Aggregation Of Judgments. Discussion Papers 2017. Web. Publisher's Version. Abstract:
We consider a standard model of judgment aggregation as presented, for example, in Dietrich (2015). For this model we introduce a sequential aggregation procedure (SAP) which uses the majority rule as much as possible. The ordering of the issues is assumed to be exogenous. The exact definition of SAP is given in Section 3. In Section 4 we construct an intuitive relevance relation for our model, closely related to conditional entailment. Unlike Dietrich (2015), where the relevance relation is given exogenously as part of the model, we require that the relevance relation be derived from the agenda. We prove that SAP has the property of independence of irrelevant issues (III) with respect to (the transitive closure of) our relevance relation. As III is weaker than the property of proposition-wise independence (PI), we do not run into impossibility results, as does List (2004), who incorporates PI in some parts of his analysis. We proceed to characterize SAP by anonymity, restricted monotonicity, local neutrality, restricted agenda property, and independence of past deliberations (see Section 5 for the precise details). Also, we use this occasion to show that Roberts's (1991) characterization of choice by plurality voting can be adapted to our model.
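The idea of a sequential procedure that follows the majority "as much as possible" can be sketched on the classic p, q, p-and-q agenda. The consistency check below is hand-coded for this tiny agenda, so it is only a cartoon of the paper's general model; the issue ordering is exogenous, and the profile is the standard discursive-dilemma one.

```python
# Minimal sketch of a sequential aggregation procedure (SAP): issues are
# processed in a fixed order; on each issue the majority verdict is adopted
# unless it contradicts what was already accepted, in which case the entailed
# verdict is adopted instead. The agenda and profile are invented.

def consistent(j):
    # partial judgment j maps issues to True/False; the only constraint in
    # this toy agenda: "p&q" must equal (p and q) once all three are settled
    if all(k in j for k in ("p", "q", "p&q")):
        return j["p&q"] == (j["p"] and j["q"])
    return True

def sap(profile, order):
    accepted = {}
    for issue in order:
        votes = [judge[issue] for judge in profile]
        majority = sum(votes) > len(votes) / 2
        trial = dict(accepted, **{issue: majority})
        accepted[issue] = majority if consistent(trial) else not majority
    return accepted

# Propositionwise majority here gives p=True, q=True, p&q=False: inconsistent.
profile = [{"p": True,  "q": True,  "p&q": True},
           {"p": True,  "q": False, "p&q": False},
           {"p": False, "q": True,  "p&q": False}]
print(sap(profile, ["p", "q", "p&q"]))  # {'p': True, 'q': True, 'p&q': True}
```

SAP follows the majority on the first two issues and then overrides the (inconsistent) majority on the conjunction, so the output depends on the exogenous ordering.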
Johannes Müller-Trede, Shoham Choshen-Hillel, Meir Barneron, Ilan Yaniv. Wisdom Of Crowds In Matters Of Taste, The. Discussion Papers 2017. Web. Publisher's Version. Abstract:
Decision makers can often improve the accuracy of their judgments on factual matters by consulting 'crowds' of others for their respective opinions. In this article, we investigate whether decision makers could similarly draw on crowds to improve the accuracy of their judgments about their own tastes and hedonic experiences. We present a theoretical model which states that accuracy gains from consulting a crowd's judgments of taste depend on the interplay among taste discrimination, crowd diversity, and the similarity between the crowd's preferences and those of the decision maker. The model also delineates the boundary conditions for such 'crowd wisdom.' Evidence supporting our hypotheses was found in two laboratory studies in which decision makers made judgments about their own enjoyment of musical pieces and short films. Our findings suggest that, although different people may have different preferences and inclinations, their judgments of taste can benefit from the wisdom of crowds.
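A small simulation in the spirit of the model: a decision maker's taste has a component shared with the crowd plus an idiosyncratic component, and the crowd mean estimates the shared part. When the crowd's preferences are similar to the decision maker's and his own judgments are noisy, the crowd mean predicts his taste better than his own noisy self-report does. All distributions and sample sizes below are invented.

```python
# Invented simulation: taste = shared component + idiosyncratic component.
# The crowd mean averages away member noise and so recovers the shared part;
# whether that beats one's own noisy judgment depends on the similarity
# (small idiosyncratic component) and on one's own noise, as in the model.

import random
random.seed(0)

n_items, crowd_size = 500, 50
shared = [random.gauss(0, 1) for _ in range(n_items)]
own_taste = [s + random.gauss(0, 0.3) for s in shared]        # similar crowd
own_judgment = [t + random.gauss(0, 1.0) for t in own_taste]  # noisy self-report
# mean of crowd_size unit-noise judgments of the shared part, drawn directly:
crowd_mean = [s + random.gauss(0, 1.0 / crowd_size ** 0.5) for s in shared]

mse = lambda est: sum((e - t) ** 2 for e, t in zip(est, own_taste)) / n_items
print(mse(own_judgment) > mse(crowd_mean))  # True: the crowd wins here
```

Shrinking the crowd, raising its diversity (the idiosyncratic spread), or lowering one's own noise shifts the comparison, which is the boundary-condition logic of the model.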
2016
Nehama, Ilan. Analyzing Games With Ambiguous Player Types Using The MINthenMAX Decision Model. Discussion Papers 2016. Web. Publisher's Version. Abstract:
In many common interactive scenarios, participants lack information about other participants, and specifically about the preferences of other participants. In this work, we model an extreme case of incomplete information, which we term games with type ambiguity, where a participant lacks even information enabling him to form a belief on the preferences of others. Under type ambiguity, one cannot analyze the scenario using the commonly used Bayesian framework, and therefore one needs to model the participants using a different decision model. To this end, we present the MINthenMAX decision model under ambiguity. This model is a refinement of Wald's MiniMax principle, which we show to be too coarse for games with type ambiguity. We characterize MINthenMAX as the finest refinement of the MiniMax principle that satisfies three properties we claim are necessary for games with type ambiguity. The prior-less approach we present here also follows the common practice in computer science of worst-case analysis. Finally, we define and analyze the corresponding equilibrium concept, when all players follow MINthenMAX. We demonstrate this equilibrium by applying it to two common economic scenarios: coordination games and bilateral trade. We show that in both scenarios an equilibrium in pure strategies always exists, and we analyze the equilibria.
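One plausible reading of such a refinement, sketched below, ranks actions lexicographically: first by their worst-case payoff (as in Wald's MiniMax) and then, among ties, by their best-case payoff. The paper's exact refinement may be finer than this two-step rule; the actions and their payoffs over ambiguous type profiles are invented.

```python
# Sketch of a lexicographic min-then-max choice rule, one plausible reading
# of a refinement of Wald's MiniMax. Each action maps to its payoffs under
# the possible (ambiguous) type profiles; all numbers are invented.

def min_then_max(actions):
    # pick the action with the best worst case; break ties by the best case
    return max(actions, key=lambda a: (min(actions[a]), max(actions[a])))

actions = {
    "safe":   [2, 2, 2],
    "gamble": [2, 0, 9],   # worse worst case: ruled out at the MiniMax step
    "bold":   [2, 2, 9],   # ties "safe" on the minimum, wins on the maximum
}
print(min_then_max(actions))   # bold
```

Plain MiniMax cannot separate "safe" from "bold" here, which illustrates why the coarser principle needs refining under type ambiguity.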
Procaccia, Uriel. Corporate Bill Of Rights. Discussion Papers 2016. Web. Publisher's Version. Abstract:
Corporate entities enjoy legal subjectivity in a variety of forms, but they are not human beings. This paper explores, from a normative point of view, one of the limits that ought to be imposed on the capacity of corporations to be treated "as if" they had a human nature: their recognition as legitimate bearers of basic human rights. The assertion that corporations, like living persons, are entitled to constitutional protection was famously brought to the fore by a number of recent Supreme Court cases, most notably the Citizens United and Hobby Lobby cases. The rational-choice analysis in this paper reveals that the new jurisprudence emanating from Citizens United may be justified in the relatively insignificant cases of small companies with an egalitarian distribution of shares, but ought to be rejected in the more meaningful cases of large public corporations with controlling stockholders. The ruling in Hobby Lobby, on the other hand, can be defended regardless of the size of the corporation or the composition of its owners. In both of these cases it is not the rights of the corporate entity that are truly at stake, and the final outcome ought to hinge on the constitutional rights of real human beings.
Uriel Procaccia, Eyal Winter. Corporate Crime And Plea Bargains. Discussion Papers 2016. Web. Publisher's Version. Abstract:
Corporate entities enjoy legal subjectivity in a variety of forms, but they are not human beings, and hence their legal capacity to bear rights and obligations of their own is not universal. This paper explores, from a normative point of view, one of the limits that ought to be set on the capacity of corporations to act "as if" they had a human nature: their capacity to commit crime. Accepted wisdom has it that corporate criminal liability is justified as a measure to deter criminal behavior. Our analysis supports this intuition in one subset of cases, but also reveals that deterrence might in fact be undermined in another subset of cases, especially in an environment saturated with plea bargains involving serious violations of the law.
Elchanan Ben-Porath, Eddie Dekel, Barton L. Lipman. Disclosure And Choice. Discussion Papers 2016. Web. Publisher's Version. Abstract:
An agent chooses among projects with random outcomes. His payoff is increasing in the outcome and in an observer's expectation of the outcome. With some probability, the agent can disclose the true outcome to the observer. We show that choice is inefficient: the agent favors riskier projects even with lower expected returns. If information can be disclosed by a challenger who prefers lower beliefs of the observer, the chosen project is excessively risky when the agent has better access to information, excessively risk-averse when the challenger has better access, and efficient otherwise. We also characterize the agent's worst-case equilibrium payoff.
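The disclosure mechanism in this setup ("with some probability, the agent can disclose") resembles Dye-style evidence models, where the observer's no-disclosure belief must be consistent with equilibrium disclosure. As a hedged sketch (not the paper's own computation): with access probability p, the agent discloses exactly when the outcome beats the no-disclosure belief b, and b solves b = E[outcome | no disclosure]. For outcomes uniform on [0, 1] this fixed point can be found by iteration; p below is an invented number.

```python
# Dye-style no-disclosure belief for outcomes uniform on [0, 1]: no
# disclosure occurs when the agent has no access (prob 1-p) or the outcome
# is below the belief b (prob p*b). Iterating the consistency condition
# b = E[outcome | no disclosure] converges to the fixed point.

def nondisclosure_belief(p, iters=100):
    b = 0.5
    for _ in range(iters):
        mass = (1 - p) + p * b                     # P(no disclosure)
        mean_part = (1 - p) * 0.5 + p * b ** 2 / 2  # E[outcome * no-disclosure]
        b = mean_part / mass
    return b

b = nondisclosure_belief(0.5)
print(round(b, 4))  # 0.4142, i.e. sqrt(2) - 1 for p = 0.5
```

The belief lies below the unconditional mean 0.5, which is what lets a risk-loving choice of project pay off: bad outcomes are pooled with the no-access event.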
Navon, Ohad. Evolutionarily Stable Strategies Of Random Games And The Facets Of Random Polytopes. Discussion Papers 2016. Web. Publisher's Version. Abstract:
An evolutionarily stable strategy (ESS) is an equilibrium strategy that is immune to invasions by rare alternative (mutant) strategies. Unlike Nash equilibria, ESS do not always exist in finite games. In this paper we address the question of what happens when the size of the game increases: does an ESS exist for 'almost every' large game? We let the entries of an n × n game matrix be independently randomly chosen according to a symmetric subexponential distribution F, and study the expected number of ESS with support of size d as n → ∞. In a previous paper by Hart, Rinott and Weiss [6] it was shown that this limit is 1/2 for d = 2. This paper deals with the case of d ≥ 4, and proves the conjecture in [6] (Section 6.c) that the expected number of ESS with support of size d ≥ 4 is 0. Furthermore, it discusses the classic problem of the number of facets of a convex hull of n random points in R^d, and relates it to the above ESS problem. Given a collection of i.i.d. random points, our result implies that the expected number of facets of their convex hull converges to 2^d as n → ∞.
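The ESS condition being counted can be made concrete for pure strategies of a symmetric game with payoff matrix A (A[i][j] is the payoff to a player using i against j): strategy i is an ESS if every mutant j either does strictly worse against i, or ties against i but does strictly worse against itself. The sketch below checks this; the first example is the standard Hawk-Dove game (V = 2, C = 4), which has no pure ESS.

```python
# Pure-strategy ESS check for a symmetric game with payoff matrix A.
# i is an ESS if for every mutant j != i:
#   A[i][i] > A[j][i], or A[i][i] == A[j][i] and A[i][j] > A[j][j].

def pure_ess(A):
    n = len(A)
    return [i for i in range(n)
            if all(A[i][i] > A[j][i]
                   or (A[i][i] == A[j][i] and A[i][j] > A[j][j])
                   for j in range(n) if j != i)]

hawk_dove = [[-1, 2],   # hawk vs hawk: (V-C)/2 = -1; hawk vs dove: V = 2
             [0, 1]]    # dove vs hawk: 0; dove vs dove: V/2 = 1
print(pure_ess(hawk_dove))          # [] -- no pure ESS (the ESS is mixed)
print(pure_ess([[3, 0], [0, 1]]))   # [0, 1] -- both strict Nash, both ESS
```

The paper's question is how many such stable points (with mixed supports of a given size d) survive when the matrix entries are drawn at random and n grows.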
Sophie Bade, Yannai A. Gonczarowski. Gibbard-Satterthwaite Success Stories And Obvious Strategyproofness. Discussion Papers 2016. Web. Publisher's Version. Abstract:
The Gibbard-Satterthwaite Impossibility Theorem (Gibbard, 1973; Satterthwaite, 1975) holds that dictatorship is the only unanimous and strategyproof social choice function on the full domain of preferences. Much of the work in mechanism design aims at getting around this impossibility theorem. Three grand success stories stand out: on the domains of single-peaked preferences, of house matching, and of quasilinear preferences, there are appealing unanimous and strategyproof social choice functions. We investigate whether these success stories are robust to strengthening strategyproofness to obvious strategyproofness, recently introduced by Li (2015). A social choice function is obviously strategyproof (OSP) implementable if even cognitively limited agents can recognize their strategies as weakly dominant. For single-peaked preferences, we characterize the class of OSP-implementable and unanimous social choice rules as dictatorships with safeguards against extremism: mechanisms (which turn out to also be Pareto optimal) in which the dictator can choose the outcome, but other agents may prevent the dictator from choosing an outcome which is too extreme. Median voting is consequently not OSP-implementable. Indeed, the only OSP-implementable quantile rules either choose the minimal or the maximal ideal point. For house matching, we characterize the class of OSP-implementable and Pareto optimal matching rules as sequential barter with lurkers, a significant generalization over bossy variants of bipolar serially dictatorial rules. While Li (2015) shows that second-price auctions are OSP-implementable when only one good is sold, we show that this positive result does not extend to the case of multiple goods. Even when all agents' preferences over goods are quasilinear and additive, no welfare-maximizing auction where losers pay nothing is OSP-implementable when more than one good is sold.
Our analysis makes use of a gradual revelation principle, an analog of the (direct) revelation principle for OSP mechanisms that we present and prove.
Hanan Shteingart, Yonatan Loewenstein. Heterogeneous Suppression Of Sequential Effects In Random Sequence Generation, But Not In Operant Learning. Discussion Papers 2016. Web. Publisher's Version. Abstract:
There is a long history of experiments in which participants are instructed to generate a long sequence of binary random numbers. The scope of this line of research has shifted over the years from identifying the basic psychological principles and/or the heuristics that lead to deviations from randomness, to one of predicting future choices. In this paper, we used generalized linear regression and the framework of Reinforcement Learning in order to address both points. In particular, we used logistic regression analysis in order to characterize the temporal sequence of participants' choices. Surprisingly, a population analysis indicated that the contribution of the most recent trial has only a weak effect on behavior, compared to more preceding trials, a result that seems irreconcilable with standard sequential effects that decay monotonically with the delay. However, when considering each participant separately, we found that the magnitudes of the sequential effect are a monotonically decreasing function of the delay, yet these individual sequential effects are largely averaged out in a population analysis because of heterogeneity. The substantial behavioral heterogeneity in this task is further demonstrated quantitatively by considering the predictive power of the model. We show that a heterogeneous model of sequential dependencies captures the structure available in random sequence generation. Finally, we show that the results of the logistic regression analysis can be interpreted in the framework of reinforcement learning, allowing us to compare the sequential effects in the random sequence generation task to those in an operant learning task. We show that in contrast to the random sequence generation task, sequential effects in operant learning are far more homogeneous across the population.
These results suggest that in the random sequence generation task, different participants adopt different cognitive strategies to suppress sequential dependencies when generating the "random" sequences.
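The core analysis, regressing the current binary choice on the previous k choices, can be sketched as follows. The synthetic "participant" below has an alternation bias (a tendency to flip the previous choice), a simple stand-in for a sequential effect; all sizes and rates are invented, and the fit uses plain batch gradient descent to stay dependency-free.

```python
# Sketch of a lagged logistic regression of choices: predict choice at trial
# t from the choices at trials t-1..t-k. The simulated chooser flips the
# previous choice with probability 0.7, so the fitted lag-1 weight should be
# negative. All parameters are invented for illustration.

import math, random
random.seed(1)

k, n = 3, 4000
seq = [0]
for _ in range(n - 1):
    seq.append(1 - seq[-1] if random.random() < 0.7 else seq[-1])

X = [[seq[t - lag] for lag in range(1, k + 1)] + [1] for t in range(k, n)]
y = [seq[t] for t in range(k, n)]

w = [0.0] * (k + 1)                    # lag weights plus an intercept
for _ in range(200):                   # batch gradient descent on log-loss
    grads = [0.0] * (k + 1)
    for xi, yi in zip(X, y):
        p = 1 / (1 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
        for j in range(k + 1):
            grads[j] += (p - yi) * xi[j]
    w = [wj - 0.5 * g / len(X) for wj, g in zip(w, grads)]

print(w[0] < 0)   # True: the negative lag-1 weight captures the alternation bias
```

Fitting such a model per participant, rather than on the pooled population, is exactly what exposes the heterogeneity the abstract describes: individual lag profiles can be strong yet cancel in the average.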
Keren, Aviv. Logic Of Love, The. Discussion Papers 2016. Web. Publisher's Version. Abstract:
This philosophical work lays the groundwork for a game-theoretic account of (romantic) love, substantiating the folk-psychological conception of love as 'a unification of souls'. It does so by setting up an appropriate universal framework of cognitive agency that accommodates such unifications and motivates them. This framework applies the gene's-eye view of evolution to the evolution of cognition, integrating it with a distributed, dynamic theory of selfhood and the game-theoretic principles of agent-unification that govern these dynamics. The application of this framework to particular biological settings produces love as a theoretical evolutionary prediction (unveiling its rationality). Through this, the connection of this strategic normativity to love's real-life behavioral and phenomenological expressions is systematically explored.
Bar-Hillel, Maya. Reply To Rodway, Schepman & Thoma (2016). Discussion Papers 2016. Web. Publisher's Version.
Bezalel Peleg, Shmuel Zamir. Sequential Aggregation Of Judgments: Logical Derivation Of The Relevance Relation. Discussion Papers 2016. Web. Publisher's Version. Abstract:
Following Dietrich (2014) we consider using choice by plurality voting (CPV) as a judgment aggregation correspondence. We notice that a result of Roberts (1991) implies that CPV is axiomatically characterized by anonymity, neutrality, unanimity, and (Young's) reinforcement. Following List (2004) and Dietrich (2015) we construct a sequential voting procedure of judgment aggregation which satisfies rationality, anonymity, unanimity, and independence of irrelevant propositions (with respect to a relevance correspondence that does not satisfy transitivity). We offer a tentative characterization of this aggregation procedure.
David Heyd, Uriel Procaccia, Uzi Segal. Thorny Quest For A Rational Constitution, The. Discussion Papers 2016. Web. Publisher's Version.
2015
Sagi Jaffe-Dax, Ofri Raviv, Nori Jacoby, Yonatan Loewenstein, Merav Ahissar. A Computational Model Of Implicit Memory Captures Dyslexics' Perceptual Deficits. Discussion Papers 2015. Web. Publisher's Version. Abstract:
Dyslexics are diagnosed for their poor reading skills. Yet they characteristically also suffer from poor verbal memory, and often from poor auditory skills. To date, this combined profile has been accounted for in broad cognitive terms. Here, we hypothesize that the perceptual deficits associated with dyslexia can be understood computationally as a deficit in integrating prior information with noisy observations. To test this hypothesis we analyzed the performance of human participants in an auditory discrimination task using a two-parameter computational model. One parameter captures the internal noise in representing the current event, and the other captures the impact of recently acquired prior information. Our findings show that dyslexics' perceptual deficit can be accounted for by inadequate adjustment of these components; namely, low weighting of their implicit memory of past trials relative to their internal noise. Underweighting the stimulus statistics decreased dyslexics' ability to compensate for noisy observations. ERP measurements (P2 component), taken while participants watched a silent movie, indicated that dyslexics' perceptual deficiency may stem from poor automatic integration of stimulus statistics. Taken together, this study provides the first description of a specific computational deficit associated with dyslexia.
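The two-parameter logic can be sketched with a simulated two-tone discrimination task: the percept of the first tone, held in noisy memory, is contracted toward the running mean of past stimuli (the implicit-memory prior). One parameter is the internal noise, the other the prior weight; a profile that underweights the prior loses the noise-reduction benefit. All numbers below are invented for illustration.

```python
# Invented two-tone discrimination simulation: percept of tone 1 is a noisy
# memory trace contracted toward the running mean of past first tones with
# weight eta (the implicit-memory prior). Underweighting the prior (small
# eta) forgoes the variance reduction it provides, lowering accuracy.

import random
random.seed(2)

def discrimination_accuracy(eta, trials=4000):
    correct, prior = 0, 0.0
    for t in range(trials):
        f1 = random.gauss(0.0, 1.0)               # tone 1, drawn around the mean
        f2 = f1 + random.choice([-0.5, 0.5])      # tone 2: slightly higher/lower
        memory1 = f1 + random.gauss(0, 1.0)       # noisy memory trace of tone 1
        percept1 = (1 - eta) * memory1 + eta * prior   # contraction to prior
        percept2 = f2 + random.gauss(0, 0.3)      # tone 2 is current: less noise
        if (percept2 > percept1) == (f2 > f1):
            correct += 1
        prior += (f1 - prior) / (t + 1)           # running mean = implicit memory
    return correct / trials

typical = discrimination_accuracy(eta=0.5)    # weights the prior near-optimally
weak_prior = discrimination_accuracy(eta=0.05)  # underweighted implicit memory
print(typical > weak_prior)  # True: contraction to the prior helps on average
```

The contraction biases individual trials toward the mean, yet improves average accuracy, which is why a low prior weight registers as a perceptual deficit rather than as superior fidelity.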
Aumann, Yonatan. A Conceptual Foundation For The Theory Of Risk Aversion. Discussion Papers 2015. Web. Publisher's Version. Abstract:
Classically, risk aversion is equated with concavity of the utility function. In this work we explore the conceptual foundations of this definition. In accordance with neo-classical economics, we seek an ordinal definition, based on the decision maker's preference order, independent of numerical values. We present two such definitions, based on simple, conceptually appealing interpretations of the notion of risk aversion. We then show that when cast in quantitative form these ordinal definitions coincide with the classical Arrow-Pratt definition (once the latter is defined with respect to the appropriate units), thus providing a conceptual foundation for the classical definition. The implications of the theory are discussed, including, in particular, implications for the understanding of insurance. The entire study is within the expected utility framework.
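As a numeric companion to the classical (Arrow-Pratt) side of the story: for a concave utility the certainty equivalent of a gamble falls below its expectation, and the gap (the risk premium) is governed locally by the coefficient -u''(x)/u'(x). A minimal sketch with u = log (for which that coefficient is 1/x) and an invented 50/50 gamble:

```python
# Certainty equivalent under expected utility: CE = u^{-1}(E[u(X)]). For the
# concave u = log, CE is the geometric mean of the outcomes, which lies below
# the arithmetic mean; the gap is the risk premium. Gamble values invented.

import math

def certainty_equivalent(u, u_inv, outcomes, probs):
    return u_inv(sum(p * u(x) for x, p in zip(outcomes, probs)))

outcomes, probs = [50.0, 150.0], [0.5, 0.5]
mean = sum(p * x for x, p in zip(outcomes, probs))            # 100.0
ce = certainty_equivalent(math.log, math.exp, outcomes, probs)
print(mean, round(ce, 2))   # 100.0 86.6: CE = sqrt(50 * 150) < mean
```

The ordinal definitions in the paper characterize when such a gap must appear without ever invoking the numerical utility scale.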