Publications

2012
The classical Bomber problem concerns properties of the optimal policy for allocating the arsenal of an airplane equipped with a given number, n, of anti-aircraft missiles, at a distance t > 0 from its destination, which is intercepted by enemy planes appearing according to a homogeneous Poisson process. The goal is to maximize the probability of reaching the destination. The Fighter problem deals with a similar situation, but the goal is to shoot down as many enemy planes as possible. The optimal allocation policies are dynamic, depending upon the times at which the enemy is met. The present paper generalizes these problems by allowing the number of enemy planes to have any distribution, not just Poisson. This implies that the optimal strategies can no longer be dynamic; they are, in our terminology, offline. We show that properties similar to those holding for the classical problems hold also in the present case. Certain properties that remain open questions in the dynamic version are resolved in the offline version. Since `time' is no longer a meaningful way to parametrize the distributions for the number of encounters, other, more general orderings of distributions are needed. Numerical comparisons between the dynamic and offline approaches are given.
There is accumulating evidence that prior expectations play an important role in perception. The Bayesian framework is the standard computational approach to explain how prior knowledge about the distribution of expected stimuli is incorporated with noisy observations in order to improve performance. However, it is unclear what information about the prior distribution is acquired by the perceptual system over short periods of time and how this information is utilized in the process of perceptual decision making. Here we address this question using a simple two-tone discrimination task. We find that the contraction bias, in which small magnitudes are overestimated and large magnitudes are underestimated, dominates the pattern of responses of human participants. This contraction bias is consistent with the Bayesian hypothesis in which the true prior information is available to the decision-maker. However, a trial-by-trial analysis of the pattern of responses reveals that the contribution of the most recent trials to performance is overweighted compared with the predictions of a standard Bayesian model. Moreover, we study participants' performance under atypical distributions of stimuli and demonstrate substantial deviations from the ideal Bayesian detector, suggesting that the brain utilizes a heuristic approximation of Bayesian inference. We propose a biologically plausible model, in which the decision in the two-tone discrimination task is based on a comparison between the second tone and an exponentially decaying average of the first tone and past tones. We show that this model accounts for both the contraction bias and the deviations from the ideal Bayesian detector hypothesis. These findings demonstrate the power of Bayesian-like heuristics in the brain, as well as their limitations, reflected in their failure to fully adapt to novel environments.
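The comparison rule described in this abstract lends itself to a compact sketch. The toy simulation below (hypothetical weights w and decay; the paper fits such parameters to behavioral data) shows how mixing the first tone with a decaying average of past tones produces a contraction-like bias: an objectively equal pair is judged "lower" when the tones fall below the history average, and "higher" when they fall above it.

```python
def contraction_model(trials, w=0.6, decay=0.5):
    """Sketch of the proposed comparison rule for two-tone discrimination.

    On each trial the second tone f2 is compared not to the first tone f1
    alone, but to a mixture of f1 and an exponentially decaying average of
    past tones.  w and decay are illustrative, not fitted, parameters.
    Returns, per trial, whether the model judges the second tone higher.
    """
    history = None
    responses = []
    for f1, f2 in trials:
        # the remembered first tone is pulled toward the running average
        reference = f1 if history is None else w * f1 + (1 - w) * history
        responses.append(f2 > reference)
        # leaky (exponentially decaying) update of the tone history
        history = f1 if history is None else decay * history + (1 - decay) * f1
    return responses
```

After a history of tones around 100, an equal pair at 50 is judged "second tone lower" (the small first tone is overestimated), while an equal pair at 150 is judged "second tone higher" (the large first tone is underestimated).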
Consider the problem of maximizing the revenue from selling a number of goods to a single buyer. We show that, unlike the case of one good, when the buyer's values for the goods increase the seller's maximal revenue may well decrease. We also provide a characterization of revenue-maximizing mechanisms (more generally, of "seller-favorable" mechanisms) that circumvents nondifferentiability issues. Finally, through simple and transparent examples, we clarify the need for and the use of randomization when maximizing revenue in the multiple-goods versus the one-good case.
The classical secretary problem for selecting the best item is studied when the actual values of the items are observed with noise. One of the main appeals of the secretary problem is that the optimal strategy is able to find the best observation with the nontrivial probability of about 0.37, even when the number of observations is arbitrarily large. The results are strikingly different when the quality of the secretaries is observed with noise. If there is no noise, then the only information that is needed is whether an observation is the best among those already observed. Since observations are assumed to be i.i.d., this is distribution free. In the case of noisy data, the results are no longer distribution free. Furthermore, one needs to know the rank of the noisy observation among those already seen. Finally, the probability of finding the best secretary often goes to 0 as the number of observations, n, goes to infinity. The results depend heavily on the behavior of p_n, the probability that the observation that is best among the noisy observations is also best among the noiseless observations. Results involving optimal strategies when all that is available is noisy data are described, and examples are given to elucidate the results.
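The classical noiseless benchmark mentioned in the abstract (skip roughly n/e candidates, then accept the first one that beats everything seen so far) succeeds with probability about 1/e ≈ 0.37, and this can be checked with a small Monte Carlo simulation; the trial count and seed here are arbitrary:

```python
import math
import random

def classic_secretary(n, trials=20000, seed=0):
    """Monte Carlo estimate of the success probability of the classical
    1/e-rule: observe the first int(n/e) candidates without accepting,
    then accept the first candidate better than all seen so far."""
    rng = random.Random(seed)
    cutoff = int(n / math.e)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))          # rank 0 denotes the best candidate
        rng.shuffle(ranks)              # candidates arrive in random order
        best_seen = min(ranks[:cutoff]) if cutoff else n
        chosen = None
        for r in ranks[cutoff:]:
            if r < best_seen:           # first candidate beating the sample
                chosen = r
                break
        if chosen == 0:                 # success iff the true best was chosen
            wins += 1
    return wins / trials
```

For n = 50 the estimate lands near the limiting value 1/e, illustrating that the success probability does not vanish as n grows, in contrast to the noisy setting studied in the paper.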
Richard P. Ebstein, Salomon Israel, Ori Weisel, and Gary Bornstein. 2012. “Oxytocin, but Not Vasopressin, Increases Both Parochial and Universal Altruism”.
In today's increasingly interconnected world, deciding with whom and at what level to cooperate becomes a matter of increasing importance as societies become more globalized and large-scale cooperation becomes a viable means of addressing global issues. This tension can play out via competition between local (e.g., within a group) and global (e.g., between groups) interests. Despite research highlighting factors influencing cooperation in such multi-layered situations, their biological basis is not well understood. In a double-blind placebo-controlled study, we investigated the influence of intranasally administered oxytocin and arginine vasopressin on cooperative behavior at local and global levels. We find that oxytocin causes an increase in both the willingness to cooperate and the expectation that others will cooperate at both levels. In contrast, participants receiving vasopressin did not differ from those receiving placebo in their cooperative behavior. Our results highlight the selective role of oxytocin in intergroup cooperative behavior.
We prove that a single-valued solution of perfectly competitive TU economies underlying nonatomic vector measure market games is uniquely determined as the Mertens (1988) value by four plausible value-related axioms. Since the Mertens value is always in the core of an economy, this result provides an axiomatization of the Mertens value as a core selection. Previous works on this matter assumed the economies to be either differentiable (e.g., Dubey and Neyman (1984)) or of uniform finite type (e.g., Haimanko (2002)). This work makes no such assumption and thus contributes to the axiomatic study of payoffs in perfectly competitive economies in general.
The problem of disagreement asks about the appropriate response (typically the response of a peer) upon encountering a disagreement between peers. The responses proposed in the literature offer different solutions to the problem, each of which has more or less normative appeal. Yet none of these seems to engage with what seems to be the real problem of disagreement. It is my aim in this paper to highlight what I think the real problem of disagreement is. It is, roughly, the problem of deciding whether a revisionary tactic is appropriate following the discovery of disagreement as well as deciding which revisionary tactic is appropriate. This, I will show, is a slippery and inevitable problem that any discussion of disagreement ought to deal with.
Among the single-valued solution concepts studied in cooperative game theory and economics, those which are also positive projections play an important role. The value, semivalues, and quasivalues of a cooperative game are several examples of solution concepts which are positive projections. These solution concepts are known to have many important applications in economics. In many applications the specific positive projection discussed is represented as an expectation of marginal contributions of agents to "random" coalitions. Usually these representations are used to characterize positive projections obeying certain additional axioms. It is thus of interest to study the representation theory of positive projections and its relation to some common axioms. We study positive projections defined over certain spaces of nonatomic Lipschitz vector measure games. To this end, we develop a general notion of "calculus" for such games, which in a sense extends the notion of the Radon–Nikodym derivative for measures. We prove several representation results for positive projections, which essentially state that the image of a game under the action of a positive projection can be represented as an averaging of its derivative w.r.t. some vector measure. We then introduce a specific calculus for the space $\mathcal{CON}$ generated by concave, monotonically nondecreasing, and Lipschitz continuous functions of finitely many nonatomic probability measures. We study in detail the properties of the resulting representations of positive projections on $\mathcal{CON}$, and especially those of values on $\mathcal{CON}$. The latter results are of great importance in various applications in economics.
Judith Avrahami, Einav Hart, and Yaakov Kareev. 2012. “Reversal of Risky Choice in a Good Versus a Bad World”.
In many situations one has to choose between risky alternatives, knowing only one's past experience with those alternatives. Such decisions can be made in more - or less - benevolent settings or 'worlds'. In a 'good world', high payoffs are more frequent than low payoffs, and vice versa in a 'bad world'. In two studies, we explored whether the world influences choice behavior: Whether people behave differently in a 'good' versus a 'bad' world. Subjects made repeated, incentivized choices between two gambles, one riskier than the other, neither offering a sure amount. The gambles were held equivalent in terms of their expected value, differing only in variance. Worlds were manipulated both between- and within-subject: In Study 1, each subject experienced one world - good, bad or mediocre; in Study 2, each subject experienced both a good and a bad world. We examine the aggregate pattern of behavior (average choice frequencies), and the dynamics of behavior across time. We observed significant differences in the aggregate pattern: In a good world, subjects tended to choose the riskier alternative, and vice versa in a bad world. The pattern of the dynamics, i.e., the transitions from round to round, was best explained by a reaction to the counterfactual reward: When the unchosen alternative yielded a better payoff, the tendency to subsequently choose it was higher. We compared these two patterns to the predictions of three types of models: reinforcement-learning, regret-based, and disappointment-based models. Behavior was in line only with the predictions of regret-based models.
Aumann–Serrano (2008) and Foster–Hart (2009) suggest two new riskiness measures, each of which enables one to elicit a complete and objective ranking of gambles according to their riskiness. Hart (2011) shows that both measures can be obtained by looking at a large set of utility functions and applying "uniform rejection criteria" to rank the gambles in accordance with this set of utilities. We use the same "uniform rejection criteria" to extend these two riskiness measures to the realm of uncertainty and develop complete and objective rankings of sets of gambles, which arise naturally in models of decision making under uncertainty.
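For concreteness, the Aumann–Serrano riskiness of a finite gamble g (positive expectation, some chance of loss) is the unique positive root R of E[exp(-g/R)] = 1, which can be found numerically; the bisection bounds below are illustrative and assume the root lies between them:

```python
import math

def as_riskiness(outcomes, probs, lo=1.0, hi=1e6, iters=100):
    """Numerically solve E[exp(-g/R)] = 1 for the Aumann-Serrano
    riskiness R of a finite gamble.  The excess E[exp(-g/r)] - 1 is
    positive below the root and negative above it, so bisection applies.
    The bracket [lo, hi] is illustrative, not universal."""
    def excess(r):
        return sum(p * math.exp(-x / r) for x, p in zip(outcomes, probs)) - 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if excess(mid) > 0:
            lo = mid          # root lies above mid
        else:
            hi = mid          # root lies at or below mid
    return 0.5 * (lo + hi)
```

For the half-half gamble of gaining 200 or losing 100, the equation reduces to a cubic whose relevant root gives exp(100/R) equal to the golden ratio, so R = 100/ln((1+√5)/2) ≈ 207.8, which the bisection recovers.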
Yonatan Loewenstein and Hanan Shteingart. 2012. “The Role of First Impression in Operant Learning”.
We quantified the effect of first experience on behavior in operant learning and studied its underlying computational principles. To that goal, we analyzed more than 200,000 choices in a repeated-choice experiment. We found that the outcome of the first experience has a substantial and lasting effect on participants' subsequent behavior, which we term outcome primacy. We found that this outcome primacy can account for much of the underweighting of rare events, where participants apparently underestimate small probabilities. We modeled behavior in this task using a standard, model-free reinforcement learning algorithm. In this model, the values of the different actions are learned over time and are used to determine the next action according to a predefined action-selection rule. We used a novel non-parametric method to characterize this action-selection rule and showed that the substantial effect of first experience on behavior is consistent with the reinforcement learning model if we assume that the outcome of first experience resets the values of the experienced actions, but not if we assume arbitrary initial conditions. Moreover, the predictive power of our resetting model outperforms previously published models regarding the aggregate choice behavior. These findings suggest that first experience has a disproportionately large effect on subsequent actions, similar to primacy effects in other fields of cognitive psychology. The mechanism of resetting of the initial conditions which underlies outcome primacy may thus also account for other forms of primacy.
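The resetting idea described above can be sketched as a minimal variant of a delta-rule value update, in which the first experienced outcome overwrites the initial value instead of being averaged into it (the learning rate and initial value here are illustrative, not the paper's fitted parameters):

```python
def q_learning_with_reset(outcomes, alpha=0.1):
    """Sketch of outcome primacy via resetting: for each action, the first
    experienced reward replaces the action's value entirely; subsequent
    rewards are incorporated with a standard delta-rule update of size
    alpha.  outcomes is a sequence of (action, reward) pairs."""
    q = {}
    seen = set()
    for action, reward in outcomes:
        if action not in seen:
            q[action] = reward                    # reset: first outcome wins
            seen.add(action)
        else:
            q[action] += alpha * (reward - q[action])  # ordinary update
    return q
```

With a small alpha, the value of an action stays close to its first outcome for many trials, which is the disproportionate effect of first experience the abstract describes; under arbitrary fixed initial conditions the first outcome would move the value by only alpha times the prediction error.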
Eyal Winter, Mikel Alvarez-Mozos, and Ziv Hellman. 2012. “Spectrum Value for Coalitional Games”.
Assuming a `spectrum' or ordering on the players of a coalitional game, as in a political spectrum in a parliamentary situation, we consider a variation of the Shapley value in which coalitions may only be formed if they are connected with respect to the spectrum. This results in a naturally asymmetric power index in which positioning along the spectrum is critical. We present both a characterisation of this value by means of properties and combinatorial formulae for calculating it. In simple majority games, the greatest power accrues to `moderate' players who are located neither at the extremes of the spectrum nor in its centre. In supermajority games, power increasingly accrues towards the extremes, and in unanimity games all power is held by the players at the extremes of the spectrum.
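One simple way to illustrate a connectivity-restricted value of this kind (a sketch of the general idea only, not claimed to be the paper's exact formula) is to average marginal contributions over just those arrival orders in which every prefix of players forms an interval, i.e., a connected set, on the spectrum:

```python
from itertools import permutations

def interval_value(n, v):
    """Average marginal contributions over the orders of players 0..n-1
    in which every prefix is an interval on the line 0 < 1 < ... < n-1.
    v maps a frozenset of players to the coalition's worth.  Exponential
    enumeration; intended only for tiny illustrative games."""
    totals = [0.0] * n
    count = 0
    for order in permutations(range(n)):
        prefix, ok, marg = set(), True, {}
        for p in order:
            before = v(frozenset(prefix))
            prefix.add(p)
            if max(prefix) - min(prefix) + 1 != len(prefix):
                ok = False                # prefix disconnected: discard order
                break
            marg[p] = v(frozenset(prefix)) - before
        if ok:
            count += 1
            for p, m in marg.items():
                totals[p] += m
    return [t / count for t in totals]
```

For the three-player simple majority game only four of the six orders have connected prefixes, and the resulting index is asymmetric even though the players are symmetric in the game, which is the key qualitative feature of a spectrum-dependent value.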
Judith Avrahami, Ilana Ritov, Amos Schurr, and Yaakov Kareev. 2012. “Taking the Broad Perspective: Risky Choices in Repeated Proficiency Tasks”.
In performing skill-based tasks individuals often face a choice between easier, less demanding alternatives whose expected payoffs in case of success are lower, and difficult, more demanding alternatives whose expected payoffs in case of success are higher: What piece to play in a musical competition, whether to operate a camera in a manual or automatic mode, etc. We maintain that the decision-maker's perspective - whether narrow or broad - is one determinant of choice, and subsequent satisfaction, in such tasks. In two experiments involving dart throwing and answering general-knowledge trivia questions, perspective was manipulated through choice procedure: A sequential choice procedure, with task difficulty chosen one at a time, was used to induce a narrow perspective, while an aggregate-choice procedure was used to induce a broad perspective. In two additional experiments, both involving a sequential-choice procedure, perspective was manipulated through priming. As predicted, in all experiments inducement of a narrow perspective resulted in a higher probability of choosing the more difficult task; it also led to lower-than-anticipated overall satisfaction.
The paper analyzes a perturbation on the players' knowledge of the game in the traveler's dilemma, by introducing some uncertainty about the range of admissible actions. The ratio between changes in the outcomes and the size of perturbation is shown to grow exponentially in the range of the given game. This is consistent with the intuition that a wider range makes the outcome of the traveler's dilemma more paradoxical. We compare this with the growth of the elasticity index (Bavly (2011)) of this game.
We prove that a single-valued solution of perfectly competitive TU economies underlying nonatomic exact market games is uniquely determined as the Mertens value by four plausible value-related axioms. Since the Mertens value is always a core element, this result provides an axiomatization of the Mertens value as a core selection. Previous works in this direction assumed the economies to be either differentiable (e.g., Dubey and Neyman [9]) or of uniform finite type (e.g., Haimanko [14]). Our work does not assume that, and thus it contributes to the axiomatic study of payoffs in perfectly competitive economies (or values of their derived market games) in general. In fact, this is the first contribution in this direction.
We introduce ideas and methods from distribution theory into value theory. This novel approach enables us to construct new diagonal formulas for the Mertens value and the Neyman value on a large space of non-differentiable games. This in turn enables us to give an affirmative answer to the question, first posed by Neyman, of whether the Mertens value and the Neyman value coincide "modulo Banach limits". The solution is an intermediate result towards a characterization of values of norm 1 of vector measure games with bounded variation.
We investigated how perspective-taking might be used to overcome bias and improve advice-based judgments. Decision makers often tend to underweight the opinions of others relative to their own, and thus fail to exploit the wisdom of others. We tested the idea that decision makers taking the perspective of another person engage a less egocentric mode of processing of advisory opinions and thereby improve their accuracy. In Studies 1-2, participants gave their initial opinions and then considered a sample of advisory opinions in two conditions. In one condition (self-perspective), they were asked to give their best advice-based estimates. In the second (other-perspective), they were asked to give advice-based estimates from the perspective of another judge. The dependent variables were the participants' accuracy and indices that traced their judgment policy. In the self-perspective condition participants adhered to their initial opinions, whereas in the other-perspective condition they were far less egocentric, weighted the available opinions more equally and produced more accurate estimates. In Study 3, initial estimates were not elicited, yet the data patterns were consistent with these conclusions. All the studies suggest that switching perspectives allows decision makers to generate advice-based judgments that are superior to those they would otherwise have produced. We discuss the merits of perspective-taking as a procedure for correcting bias, suggesting that it is theoretically justifiable, practicable, and effective.
2011
Pathological Altruism
Ariel Knafo, Barbara Oakley, Guruprasad Madhavan, and David Sloan Wilson. 1/2011. Pathological Altruism. Oxford University Press.

The benefits of altruism and empathy are obvious. These qualities are so highly regarded and embedded in both secular and religious societies that it seems almost heretical to suggest they can cause harm. Like most good things, however, altruism can be distorted or taken to an unhealthy extreme. Pathological Altruism presents a number of new, thought-provoking theses that explore a range of hurtful effects of altruism and empathy. Pathologies of empathy, for example, may trigger depression as well as the burnout seen in healthcare professionals. The selflessness of patients with eating abnormalities forms an important aspect of those disorders. Hyperempathy - an excess of concern for what others think and how they feel - helps explain popular but poorly defined concepts such as codependency. In fact, pathological altruism, in the form of an unhealthy focus on others to the detriment of one's own needs, may underpin some personality disorders. Pathologies of altruism and empathy not only underlie health issues, but also a disparate slew of humankind's most troubled features, including genocide, suicide bombing, self-righteous political partisanship, and ineffective philanthropic and social programs that ultimately worsen the situations they are meant to aid. Pathological Altruism is a groundbreaking new book - the first to explore the negative aspects of altruism and empathy, seemingly uniformly positive traits. The contributing authors provide a scientific, social, and cultural foundation for the subject of pathological altruism, creating a new field of inquiry. Each author's approach points to one disturbing truth: what we value so much, the altruistic "good" side of human nature, can also have a dark side that we ignore at our peril.

The term social preference refers to decision makers' satisfaction with their own outcomes and those attained by comparable others. The present research was inspired by what appears to be a discrepancy in the literature on social preferences: specifically, between a class of studies demonstrating people's concern with inequality and others documenting their motivation to increase social welfare. We propose a theoretical framework to account for this puzzling difference. In particular, we argue that a characteristic of the decision setting, an individual's role in creating the outcomes, referred to as agency, critically affects decision makers' weighting of opposing social motives. Namely, in settings where people can merely judge the outcomes, but cannot affect them ("low agency"), their concern with inequality figures prominently. In contrast, in settings where people determine the outcomes for themselves and others ("high agency"), their concern with the welfare of others is prominent. Three studies employing a new salary-allocation paradigm document a robust effect of agency. In the high-agency condition participants had to assign salaries, while in the low-agency condition they indicated their satisfaction with equivalent predetermined salaries. We found that compared with low-agency participants, high-agency participants were less concerned with disadvantageous salary allocations and were even willing to sacrifice a portion of their pay to better others' outcomes. The effects of agency are discussed in connection to inequality aversion, social comparison, prosocial behavior, and preference construction.
In this paper we analyze judgement aggregation problems in which a group of agents independently votes on a set of complex propositions that have some interdependency constraints between them (e.g., transitivity when describing preferences). We consider the issue of judgement aggregation from the perspective of approximation. That is, we generalize the previous results by studying approximate judgement aggregation. We relax the two main constraints assumed in the current literature, Consistency and Independence, and consider mechanisms that only approximately satisfy these constraints, that is, satisfy them up to a small portion of the inputs. The main question we raise is whether the relaxation of these notions significantly alters the class of satisfying aggregation mechanisms. The recent works on preference aggregation of Kalai, Mossel, and Keller fit into this framework. The main result of this paper is that, as in the case of preference aggregation, for a subclass of a natural class of aggregation problems termed `truth-functional agendas', the set of satisfying aggregation mechanisms does not extend non-trivially when the constraints are relaxed. Our proof techniques involve the Boolean Fourier transform and an analysis of voter influences in voting protocols. The question we raise for approximate aggregation can also be stated in terms of property testing. For instance, as a corollary of our result we obtain a generalization of the classic result on property testing of linearity of Boolean functions.