Publications

2012
Einav Hart, Judith Avrahami, and Yaakov Kareev. 2012. “Reversal of Risky Choice in a Good Versus a Bad World”.
In many situations one has to choose between risky alternatives, knowing only one's past experience with those alternatives. Such decisions can be made in more- or less-benevolent settings or 'worlds'. In a 'good world', high payoffs are more frequent than low payoffs, and vice versa in a 'bad world'. In two studies, we explored whether the world influences choice behavior: Whether people behave differently in a 'good' versus a 'bad' world. Subjects made repeated, incentivized choices between two gambles, one riskier than the other, neither offering a sure amount. The gambles were held equivalent in terms of their expected value, differing only in variance. Worlds were manipulated both between- and within-subject: In Study 1, each subject experienced one world - good, bad or mediocre; in Study 2, each subject experienced both a good and a bad world. We examined the aggregate pattern of behavior (average choice frequencies) and the dynamics of behavior across time. We observed significant differences in the aggregate pattern: In a good world, subjects tended to choose the riskier alternative, and vice versa in a bad world. The pattern of the dynamics, i.e., the transitions from round to round, was best explained by a reaction to the counterfactual reward: When the unchosen alternative yielded a better payoff, the tendency to subsequently choose it was higher. We compared these two patterns to the predictions of three types of models: Reinforcement learning, regret-based and disappointment-based models. Behavior was in line only with the predictions of regret-based models.
Aumann–Serrano (2008) and Foster–Hart (2009) suggest two new riskiness measures, each of which enables one to elicit a complete and objective ranking of gambles according to their riskiness. Hart (2011) shows that both measures can be obtained by looking at a large set of utility functions and applying "uniform rejection criteria" to rank the gambles in accordance with this set of utilities. We use the same "uniform rejection criteria" to extend these two riskiness measures to the realm of uncertainty and develop complete and objective rankings of sets of gambles, which arise naturally in models of decision making under uncertainty.
Hanan Shteingart and Yonatan Loewenstein. 2012. “The Role of First Impression in Operant Learning”.
We quantified the effect of first experience on behavior in operant learning and studied its underlying computational principles. To that end, we analyzed more than 200,000 choices in a repeated-choice experiment. We found that the outcome of the first experience has a substantial and lasting effect on participants' subsequent behavior, which we term outcome primacy. We found that this outcome primacy can account for much of the underweighting of rare events, where participants apparently underestimate small probabilities. We modeled behavior in this task using a standard, model-free reinforcement learning algorithm. In this model, the values of the different actions are learned over time and are used to determine the next action according to a predefined action-selection rule. We used a novel non-parametric method to characterize this action-selection rule and showed that the substantial effect of first experience on behavior is consistent with the reinforcement learning model if we assume that the outcome of first experience resets the values of the experienced actions, but not if we assume arbitrary initial conditions. Moreover, our resetting model outperforms previously published models in predicting aggregate choice behavior. These findings suggest that first experience has a disproportionately large effect on subsequent actions, similar to primacy effects in other fields of cognitive psychology. The mechanism of resetting of the initial conditions which underlies outcome primacy may thus also account for other forms of primacy.
Mikel Alvarez-Mozos, Ziv Hellman, and Eyal Winter. 2012. “Spectrum Value for Coalitional Games”.
Assuming a `spectrum' or ordering on the players of a coalitional game, as in a political spectrum in a parliamentary situation, we consider a variation of the Shapley value in which coalitions may only be formed if they are connected with respect to the spectrum. This results in a naturally asymmetric power index in which positioning along the spectrum is critical. We present both a characterisation of this value by means of properties and combinatoric formulae for calculating it. In simple majority games, the greatest power accrues to `moderate' players who are located neither at the extremes of the spectrum nor in its centre. In supermajority games, power increasingly accrues towards the extremes, and in unanimity games all power is held by the players at the extremes of the spectrum.
Amos Schurr, Yaakov Kareev, Judith Avrahami, and Ilana Ritov. 2012. “Taking the Broad Perspective: Risky Choices in Repeated Proficiency Tasks”.
In performing skill-based tasks, individuals often face a choice between easier, less demanding alternatives, but ones whose expected payoffs in case of success are lower, and difficult, more demanding alternatives whose expected payoffs in case of success are higher: What piece to play in a musical competition, whether to operate a camera in a manual or automatic mode, etc. We maintain that the decision-maker's perspective - whether narrow or broad - is one determinant of choice, and subsequent satisfaction, in such tasks. In two experiments involving dart throwing and answering general-knowledge trivia questions, perspective was manipulated through choice procedure: A sequential choice procedure, with task difficulty chosen one at a time, was used to induce a narrow perspective, while an aggregate-choice procedure was used to induce a broad perspective. In two additional experiments, both involving a sequential-choice procedure, perspective was manipulated through priming. As predicted, in all experiments inducement of a narrow perspective resulted in a higher probability of choosing the more difficult task; it also led to lower-than-anticipated overall satisfaction.
The paper analyzes a perturbation on the players' knowledge of the game in the traveler's dilemma, by introducing some uncertainty about the range of admissible actions. The ratio between changes in the outcomes and the size of perturbation is shown to grow exponentially in the range of the given game. This is consistent with the intuition that a wider range makes the outcome of the traveler's dilemma more paradoxical. We compare this with the growth of the elasticity index (Bavly (2011)) of this game.
We prove that a single-valued solution of perfectly competitive TU economies underlying nonatomic exact market games is uniquely determined as the Mertens value by four plausible value-related axioms. Since the Mertens value is always a core element, this result provides an axiomatization of the Mertens value as a core-selection. Previous works in this direction assumed the economies to be either differentiable (e.g., Dubey and Neyman [9]) or of uniform finite-type (e.g., Haimanko [14]). Our work does not assume that, and thus it contributes to the axiomatic study of payoffs in perfectly competitive economies (or values of their derived market games) in general. In fact, this is the first contribution in this direction.
We introduce ideas and methods from distribution theory into value theory. This novel approach enables us to construct new diagonal formulas for the Mertens value and the Neyman value on a large space of non-differentiable games. This in turn enables us to give an affirmative answer to the question, first posed by Neyman, of whether the Mertens value and the Neyman value coincide "modulo Banach limits". The solution is an intermediate result towards a characterization of values of norm 1 of vector measure games with bounded variation.
We investigated how perspective-taking might be used to overcome bias and improve advice-based judgments. Decision makers often tend to underweight the opinions of others relative to their own, and thus fail to exploit the wisdom of others. We tested the idea that decision makers taking the perspective of another person engage a less egocentric mode of processing of advisory opinions and thereby improve their accuracy. In Studies 1-2, participants gave their initial opinions and then considered a sample of advisory opinions in two conditions. In one condition (self-perspective), they were asked to give their best advice-based estimates. In the second (other-perspective), they were asked to give advice-based estimates from the perspective of another judge. The dependent variables were the participants' accuracy and indices that traced their judgment policy. In the self-perspective condition participants adhered to their initial opinions, whereas in the other-perspective condition they were far less egocentric, weighted the available opinions more equally and produced more accurate estimates. In Study 3, initial estimates were not elicited, yet the data patterns were consistent with these conclusions. All the studies suggest that switching perspectives allows decision makers to generate advice-based judgments that are superior to those they would otherwise have produced. We discuss the merits of perspective-taking as a procedure for correcting bias, suggesting that it is theoretically justifiable, practicable, and effective.
2011
The term social preference refers to decision makers' satisfaction with their own outcomes and those attained by comparable others. The present research was inspired by what appears to be a discrepancy in the literature on social preferences: specifically, between a class of studies demonstrating people's concern with inequality and others documenting their motivation to increase social welfare. We propose a theoretical framework to account for this puzzling difference. In particular, we argue that a characteristic of the decision setting, namely an individual's role in creating the outcomes, referred to as agency, critically affects decision makers' weighting of opposing social motives. Namely, in settings where people can merely judge the outcomes, but cannot affect them ("low agency"), their concern with inequality figures prominently. In contrast, in settings where people determine the outcomes for themselves and others ("high agency"), their concern with the welfare of others is prominent. Three studies employing a new salary-allocation paradigm document a robust effect of agency. In the high-agency condition participants had to assign salaries, while in the low-agency condition they indicated their satisfaction with equivalent predetermined salaries. We found that compared with low-agency participants, high-agency participants were less concerned with disadvantageous salary allocations and were even willing to sacrifice a portion of their pay to better others' outcomes. The effects of agency are discussed in connection to inequality aversion, social comparison, prosocial behavior, and preference construction.
In this paper we analyze judgement aggregation problems in which a group of agents independently votes on a set of complex propositions that have some interdependency constraints between them (e.g., transitivity when describing preferences). We consider the issue of judgement aggregation from the perspective of approximation. That is, we generalize the previous results by studying approximate judgement aggregation. We relax the two main constraints assumed in the current literature, Consistency and Independence, and consider mechanisms that only approximately satisfy these constraints, that is, satisfy them up to a small portion of the inputs. The main question we raise is whether the relaxation of these notions significantly alters the class of satisfying aggregation mechanisms. The recent works on preference aggregation of Kalai, Mossel, and Keller fit into this framework. The main result of this paper is that, as in the case of preference aggregation, for a subclass of a natural class of aggregation problems termed `truth-functional agendas', the set of satisfying aggregation mechanisms does not extend non-trivially when the constraints are relaxed. Our proof techniques involve the Boolean Fourier transform and analysis of voter influences for voting protocols. The question we raise for approximate aggregation can be stated in terms of property testing. For instance, as a corollary of our result we get a generalization of the classic result for property testing of linearity of Boolean functions.
Itai Arieli and Yakov Babichenko. 2011. “Average Testing and the Efficient Boundary”.
We propose a simple adaptive procedure for playing strategic games: average testing. In this procedure each player sticks to her current strategy if it yields a payoff that exceeds her average payoff by at least some fixed epsilon > 0; otherwise she chooses a strategy at random. We consider generic two-person games where both players play according to the average testing procedure in blocks of k periods. We demonstrate that for all k large enough, the pair of time-average payoffs converges (almost surely) to the 3epsilon-Pareto-efficient boundary.
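The adaptive rule just described is easy to simulate. Below is a minimal sketch, assuming a hypothetical 2x2 coordination game, an arbitrary epsilon of 0.5, and per-period (rather than block) adjustment; none of these specifics come from the paper, which treats generic two-person games played in blocks of k periods.

```python
import random

# Hypothetical 2x2 coordination game (not from the paper): PAYOFFS[i][a1][a2]
# is player i's payoff when player 1 plays a1 and player 2 plays a2.
# Outcome (0, 0) is the Pareto-efficient equilibrium; (1, 1) is inefficient.
PAYOFFS = [
    [[4, 0], [0, 1]],  # player 1
    [[4, 0], [0, 1]],  # player 2
]
EPSILON = 0.5  # an arbitrary illustrative choice of the fixed epsilon > 0


def average_testing(rounds=10_000, seed=0):
    """Each round, a player sticks to her current strategy if its realized
    payoff exceeds her running average payoff by at least EPSILON;
    otherwise she switches to a uniformly random strategy.
    Returns both players' time-average payoffs."""
    rng = random.Random(seed)
    strategies = [rng.randrange(2), rng.randrange(2)]
    totals = [0.0, 0.0]
    for t in range(1, rounds + 1):
        a1, a2 = strategies  # both payoffs use the same simultaneous play
        for i, pay in enumerate((PAYOFFS[0][a1][a2], PAYOFFS[1][a1][a2])):
            totals[i] += pay
            if pay < totals[i] / t + EPSILON:  # not far enough above average
                strategies[i] = rng.randrange(2)  # test a random strategy
    return [total / rounds for total in totals]
```

In runs of this sketch the play spends most of its time at the efficient outcome, so each player's time-average payoff tends to settle well above the inefficient equilibrium payoff of 1, illustrating the pull toward the efficient boundary.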
Maya Bar-Hillel and Uriel Procaccia. 2011. “Behavioral Economics and the Law (in Hebrew)”.
Moty Amar, Dan Ariely, Maya Bar-Hillel, Ziv Carmon, and Chezy Ofir. 2011. “Brand Names Act Like Marketing Placebos”.
This research illustrates the power of reputation, such as that embodied in brand names, demonstrating that names can enhance objective product efficacy. Study participants facing a glaring light were asked to read printed words as accurately and as quickly as they could, receiving compensation proportional to their performance. Those wearing sunglasses tagged Ray-Ban made fewer errors, yet read more quickly, than those wearing the identical pair of sunglasses when tagged Mango (a less prestigious brand). Similarly, ear-muffs blocked noise more effectively, and chamomile tea improved mental focus more, when otherwise identical target products carried more reputable names.
The traditional premise of criminal law is that criminals who are convicted of similar crimes under similar circumstances ought to be subject to identical sentences. This article provides an efficiency-based rationale for discriminatory sentencing, i.e., establishes circumstances under which identical crimes ought to be subject to differential sentencing. We also establish the relevance of this finding to the practices of sentencing and, in particular, to the Sentencing Guidelines. Most significantly, we establish that the model can explain why celebrities, leaders, or recidivists ought to be subject to harsher sanctions than others. Discriminatory sentencing is optimal when criminals confer positive externalities on each other. If a criminal A who imposes (non-reciprocal) large positive externalities on criminal B is punished sufficiently harshly, B would expect A not to commit the crime and consequently, he would expect not to benefit from the positive externalities conferred on him by A. Given that B's expected benefits are lower, the sanctions sufficient to deter B are also lower than the ones imposed on A. The result can be easily extended to the case of reciprocal externalities. Assume that a criminal A imposes positive externalities on B and B imposes identical positive externalities on A. If A is subject to a sufficiently harsh sanction and B knows this, B would expect A not to perform the crime and therefore would expect not to benefit from the positive externalities otherwise conferred on B. Consequently, a more lenient sanction than the sanction imposed on A would be sufficient to deter B.
Edna Ullmann-Margalit. 2011. “Considerateness”.
A stranger entering the store ahead of you may hold the door open so it does not slam in your face, or your daughter may tidy up the kitchen when she realizes that you are very tired: both act out of considerateness. In acting considerately one takes others into consideration. The considerate act aims at contributing to the wellbeing of somebody else at a low cost to oneself. Focusing on the extreme poles of the spectrum of human relationships, I argue that considerateness is the foundation upon which our relationships are to be organized in both the thin, anonymous context of the public space and the thick, intimate context of the family. The first part of the paper, sections I–III, explores the idea that considerateness is the minimum that we owe to one another in the public space. By acting considerately toward strangers we show respect to that which we share as people, namely, to our common humanity. The second part, sections IV–VIII, explores the idea that the family is constituted on a foundation of considerateness. Referring to the particular distribution of domestic burdens and benefits adopted by each family as its "family deal", I argue that the considerate family deal embodies a distinct, family-oriented notion of fairness. The third part, sections IX–XV, takes up the notion of family fairness, contrasting it with justice. In particular I take issue with Susan Okin's notion of the just family. Driving a wedge between justice and fairness, I propose an idea of family fairness that is partial and sympathetic rather than impartial and empathic, particular and internal rather than generalizable, and based on ongoing comparisons of preferences among family members. I conclude by characterizing the good family as the not-unjust family that is considerate and fair.
Two agents independently choose mixed m-recall strategies that take actions in finite action spaces A1 and A2. The strategies induce a random play, a_1, a_2, ..., where a_t assumes values in A1 x A2. An M-recall observer observes the play. The goal of the agents is to make the observer believe that the play is similar to a sequence of i.i.d. random actions whose distribution is Q in Delta(A1 x A2). For nearly every t, the following event should occur with probability close to one: "the distribution of a_{t+M} given a_t, ..., a_{t+M-1} is close to Q." We provide a necessary and sufficient condition on m, M, and Q under which this goal can be achieved (for large m). This work is a step in the direction of establishing a folk theorem for repeated games with bounded recall. It tries to tackle the difficulty in computing the individually rational levels (IRL) in the bounded recall setting. Our result implies, for example, that in some games the IRL in the bounded recall game is bounded away from, and strictly below, the IRL in the stage game, even when all the players have the same recall capacity.
The ability to detect a change, to accurately assess the magnitude of the change, and to react to that change in a commensurate fashion are of critical importance in many decision domains. Thus, it is important to understand the factors that systematically affect people's reactions to change. In this article we document a novel effect: Decision makers' reactions to a change (e.g., a visual change, a technology change) were systematically affected by the type of categorizations they encountered in an unrelated prior task (e.g., the response categories associated with a survey question). We found that prior exposure to narrow, as opposed to broad, categorizations improved decision makers' ability to detect change and led to stronger reactions to a given change. These differential reactions occurred because the prior categorizations, even though unrelated, altered the extent to which the subsequently presented change was perceived as either a relatively large change or a relatively small one.
Noam Bar-Shai, Tamar Keasar and Avi Shmida. 2011. “Do Solitary Bees Count to Five?”.
Efficient foragers avoid returning to food sources that they had previously depleted. Bombus terrestris bumblebees use a counting-like strategy to leave Alcea setosa flowers just after visiting all of their five nectaries. We tested whether a similar strategy is employed by solitary Eucera sp. bees that also forage on A. setosa. Analyses of 261 video-recorded flower visits showed that the bees most commonly probed five nectaries, but occasionally (in 7.8% of visits) continued to a nectary they had already visited. Probing durations that preceded flower departures were generally shorter than probings that were followed by an additional nectary visit in the same flower. Assuming that probing durations correlate with nectar volumes, this suggests that flower departure frequencies increased after probing of low-rewarding nectaries. The flowers' spatial attributes were not used as departure cues, but the bees may have left flowers in response to scent marks on previously visited nectaries. We conclude that Eucera females do not exhibit numerical competence as a mechanism for efficient patch use, but rather a combination of a reward-based leaving rule and scent-marking. The bees' foraging pattern is compatible with Waage's (1979, Journal of Animal Ecology, 48, 353-371) patch departure rule, which states that the tendency to leave a foraging patch increases with time, and decreases when food items are encountered. Thus, Eucera resemble bumblebees in avoiding most revisits to already-visited nectaries, but use a different foraging strategy to do so. This difference may reflect lower learning capabilities of solitary bee species compared to social ones.
We develop an elasticity index of a strategic game. The index measures the robustness of the set of rational outcomes of a game. The elasticity index of a game is the maximal ratio between the change of the rational outcomes and the size of an infinitesimal perturbation. The perturbation is on the players' knowledge of the game. The elasticity of a strategic game is a nonnegative number. A small elasticity is indicative of the robustness of the rational outcomes (for example, if there is only one player the elasticity is 0), and a large elasticity is indicative of non-robustness. For example, the elasticity of the (normalized) n-stage finitely repeated prisoner's dilemma is at least exponential in n, as is the elasticity of the n-stage centipede game and the n-ranged traveler's dilemma. The concept of elasticity enables us to look from a different perspective at Neyman's (1999) repeated games when the number of repetitions is not commonly known, and Aumann's (1992) demonstration of the effect of irrationality perturbations.