Publications

2012
We prove that every continuous value on a space of vector measure market games $Q$, containing the space of nonatomic measures $NA$, has the \textit{conic} property, i.e., if a game $v \in Q$ coincides with a nonatomic measure $\nu$ on a conical diagonal neighborhood then $\varphi(v)=\nu$. We deduce that every continuous value on the linear space $\mathcal{M}$, spanned by all vector measure market games, is determined by its values on $\mathcal{LM}$ - the space of vector measure market games which are Lipschitz functions of the measures.
Every continuous-time stochastic game with finitely many states and actions has a uniform and limiting-average equilibrium payoff.
We study non-zero-sum continuous-time stochastic games, also known as continuous-time Markov games, of fixed duration. We concentrate on Markovian strategies. We show by way of example that equilibria need not exist in Markovian strategies, but they always exist in Markovian public-signal correlated strategies. To do so, we develop criteria for a strategy profile to be an equilibrium via differential inclusions, both directly and also by modeling continuous-time stochastic games as differential games and using the Hamilton-Jacobi-Bellman equations. We also give an interpretation of equilibria in mixed strategies in continuous time, and show that approximate equilibria always exist.
We show that the no betting characterisation of the existence of common priors over finite type spaces extends only partially to improper priors in the countably infinite state space context: the existence of a common prior implies the absence of a bounded agreeable bet, and the absence of a common improper prior implies the existence of a bounded agreeable bet. However, a type space that lacks a common prior but has a common improper prior may or may not have a bounded agreeable bet. The iterated expectations characterisation of the existence of common priors extends almost as is, as a sufficient and necessary condition, from finite spaces to countable spaces, but fails to serve as a characterisation of common improper priors. As a side-benefit of the proofs here, we also obtain a constructive proof of the no betting characterisation in finite spaces.
A population that can be joined at a known sequence of discrete times is sampled cross-sectionally, and the sojourn times of individuals in the sample are observed. It is well known that cross-sectioning leads to length-bias, but less well known that it may result also in dependence among the observations, which is often ignored. It is therefore important to understand and to account for this dependence when estimating the distribution of sojourn times in the population. In this paper, we study conditions under which observed sojourn times are independent and conditions under which treating observations as independent, using the product of marginals in spite of dependence, results in proper inference. The latter is known as the Composite Likelihood approach. We study parametric and nonparametric inference based on Composite Likelihood, and provide conditions for consistency, and further asymptotic properties, including normal and non-normal distributional limits of estimators. We show that Composite Likelihood leads to good estimators under certain conditions, and illustrate that it may fail without them. The theoretical study is supported by simulations. We apply the proposed methods to two data sets collected by cross-sectional designs: data on hospitalization time after bowel and hernia surgeries, and data on service times at our university.
We study conditions relating to the impossibility of agreeing to disagree in models of interactive KD45 belief (in contrast to models of S5 knowledge, which are used in nearly all the agreements literature). Agreement and disagreement are studied under models of belief in three broad settings: non-probabilistic decision models, probabilistic belief revision of priors, and dynamic communication among players. We show that even when the truth axiom is not assumed, players will find it impossible to agree to disagree under fairly broad conditions.
We present a discounted stochastic game with a continuum of states, finitely many players and actions, such that although all transitions are absolutely continuous w.r.t. a fixed measure, it possesses no stationary equilibria. This absolute continuity condition has been assumed in many equilibrium existence results, and the game presented here complements a recent example of ours of a game with no stationary equilibria but which possesses deterministic transitions. We also show that if one allows for compact action spaces, even games with state-independent transitions need not possess stationary equilibria.
We present an example of a discounted stochastic game with a continuum of states, finitely many players and actions, and deterministic transitions, that possesses no measurable stationary equilibria, or even stationary approximate equilibria. The example is robust to perturbations of the payoffs, the transitions, and the discount factor, and hence gives a strong nonexistence result for stationary equilibria. The example is a game of perfect information, and hence it also does not possess stationary extensive-form correlated equilibrium. Markovian equilibria are also shown not to exist in appropriate perturbations of our example.
This paper studies theoretically the aggregate distribution of revealed preferences when heterogeneous individuals make the trade-off between being true to their real opinions and conforming to a social norm. We show that in orthodox societies, individuals will tend to either conform fully or ignore the social norm while individuals in liberal societies will tend to compromise between the two extremes. The model sheds light on phenomena such as polarization, alienation and hypocrisy. We also show that societies with orthodox individuals will be liberal on aggregate unless the social norm is upheld by an authority. This suggests that orthodoxy cannot be maintained under pluralism.
In their seminal works, Arrow (1965) and Pratt (1964) defined two aspects of risk aversion: absolute risk aversion and relative risk aversion. Based on their definitions, we define two aspects of risk: absolute risk and relative risk. We consider situations in which, by making an investment, an agent exchanges a certain amount of wealth w for a randomly distributed level of wealth W. In such situations, we define absolute risk as the riskiness of a gamble that is distributed as W-w, and relative risk as the riskiness of a security that is distributed as W/w. We measure absolute risk by the Aumann and Serrano (2008) index of riskiness and relative risk by an equivalent index that we develop in this paper. The two concepts of risk do not necessarily agree on which one of two investments is riskier, and hence they capture two different aspects of risk.
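As a rough sketch of the definitions involved (the first equation is the Aumann and Serrano (2008) definition; the multiplicative analogue for the relative index is only our illustrative reading of the abstract, not its exact formulation):

```latex
% Aumann-Serrano (2008) index of riskiness R(g) of the absolute gamble
% g = W - w: the unique positive solution of
\mathbb{E}\!\left[ e^{-g / R(g)} \right] = 1
% A hypothetical multiplicative analogue for a relative index Q of the
% security s = W / w, shown for illustration only:
\mathbb{E}\!\left[ s^{-1/Q} \right] = 1
```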
Yaakov Kareev, Judith Avrahami, Amos Schurr, and Ilana Ritov. 2012. “The Effect of Perspective on Unethical Behavior”.
In two experiments, we explored how the perspective through which individuals view their decisions influences their moral behavior. To do this we employed a computerized "Is that the answer you had in mind?" trivial-pursuit style game. The game challenges individuals' integrity because cheating during play cannot be detected. Perspective, whether local or global, was manipulated: In Experiment 1 the choice procedure was used to evoke a local or an integrative perspective of one's choices, whereas in Experiment 2, perspective was manipulated through priming. Across both experiments, we observed that when given an incentive to cheat, the adoption of a local perspective increased cheating, as evidenced by overall higher reported success rates. These findings have clear implications for explaining and controlling behavior in other situations (e.g., exercising, dieting) in which the perspective one takes is a matter of choice.
Simon (2003) presented an example of a 3-player Bayesian game with no Bayesian equilibria, but it has been an open question whether or not there are games with no approximate Bayesian equilibria. We present an example of a Bayesian game with two players, two actions and a continuum of states that possesses no approximate Bayesian equilibria, thus resolving the question. As a side benefit we also obtain, for the first time, an example of a 2-player Bayesian game with no Bayesian equilibria and an example of a strategic-form game with no approximate Nash equilibria. The construction makes use of techniques developed in an example by Y. Levy of a discounted stochastic game with no stationary equilibria.
The classical Bomber problem concerns properties of the optimal allocation policy of arsenal for an airplane equipped with a given number, n, of anti-aircraft missiles, at a distance t > 0 from its destination, which is intercepted by enemy planes appearing according to a homogeneous Poisson process. The goal is to maximize the probability of reaching its destination. The Fighter problem deals with a similar situation, but the goal is to shoot down as many enemy planes as possible. The optimal allocation policies are dynamic, depending upon the times at which the enemy is met. The present paper generalizes these problems by allowing the number of enemy planes to have any distribution, not just Poisson. This implies that the optimal strategies can no longer be dynamic, and are, in our terminology, offline. We show that properties similar to those holding for the classical problems hold also in the present case. Certain properties that remain open questions in the dynamic version are resolved in the offline version. Since `time' is no longer a meaningful way to parametrize the distributions for the number of encounters, other more general orderings of distributions are needed. Numerical comparisons between the dynamic and offline approaches are given.
There is accumulating evidence that prior knowledge about expectations plays an important role in perception. The Bayesian framework is the standard computational approach to explain how prior knowledge about the distribution of expected stimuli is incorporated with noisy observations in order to improve performance. However, it is unclear what information about the prior distribution is acquired by the perceptual system over short periods of time and how this information is utilized in the process of perceptual decision making. Here we address this question using a simple two-tone discrimination task. We find that the contraction bias, in which small magnitudes are overestimated and large magnitudes are underestimated, dominates the pattern of responses of human participants. This contraction bias is consistent with the Bayesian hypothesis in which the true prior information is available to the decision-maker. However, a trial-by-trial analysis of the pattern of responses reveals that the contribution of most recent trials to performance is overweighted compared with the predictions of a standard Bayesian model. Moreover, we study participants' performance in atypical distributions of stimuli and demonstrate substantial deviations from the ideal Bayesian detector, suggesting that the brain utilizes a heuristic approximation of the Bayesian inference. We propose a biologically plausible model, in which the decision in the two-tone discrimination task is based on a comparison between the second tone and an exponentially-decaying average of the first tone and past tones. We show that this model accounts for both the contraction bias and the deviations from the ideal Bayesian detector hypothesis. These findings demonstrate the power of Bayesian-like heuristics in the brain, as well as their limitations in their failure to fully adapt to novel environments.
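The proposed comparison rule can be sketched in a few lines. The decay parameter and the response rule below are illustrative assumptions, not the paper's fitted model:

```python
def contraction_model(trials, decay=0.4):
    """Illustrative sketch: respond 'second tone higher' by comparing the
    second tone with an exponentially decaying average of the current and
    past first tones (decay=0.4 is an arbitrary illustrative value)."""
    trace = None
    responses = []
    for f1, f2 in trials:
        # fold the current first tone into the decaying trace of past tones
        trace = f1 if trace is None else (1 - decay) * f1 + decay * trace
        responses.append(f2 > trace)  # True means "second tone higher"
    return responses

# A first tone far above recent history is contracted toward that history:
# on the second trial the model answers "second higher" even though f2 < f1.
print(contraction_model([(500, 600), (1000, 990)]))  # [True, True]
```

The single `decay` parameter controls how heavily recent trials are overweighted, matching the trial-by-trial effect described above.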
Consider the problem of maximizing the revenue from selling a number of goods to a single buyer. We show that, unlike the case of one good, when the buyer's values for the goods increase the seller's maximal revenue may well decrease. We also provide a characterization of revenue-maximizing mechanisms (more generally, of "seller-favorable" mechanisms) that circumvents nondifferentiability issues. Finally, through simple and transparent examples, we clarify the need for and the use of randomization when maximizing revenue in the multiple-goods versus the one-good case.
The classical secretary problem for selecting the best item is studied when the actual values of the items are observed with noise. One of the main appeals of the secretary problem is that the optimal strategy is able to find the best observation with the nontrivial probability of about 0.37, even when the number of observations is arbitrarily large. The results are strikingly different when the qualities of the secretaries are observed with noise. If there is no noise, then the only information that is needed is whether an observation is the best among those already observed. Since observations are assumed to be i.i.d. this is distribution free. In the case of noisy data, the results are no longer distribution free. Furthermore, one needs to know the rank of the noisy observation among those already seen. Finally, the probability of finding the best secretary often goes to 0 as the number of observations, n, goes to infinity. The results depend heavily on the behavior of p_n, the probability that the observation that is best among the noisy observations is also best among the noiseless observations. Results involving optimal strategies if all that is available is noisy data are described and examples are given to elucidate the results.
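For contrast with the noisy setting, the noiseless benchmark of about 0.37 is easy to check by simulation. This is only a sketch of the classical 1/e rule; the sample size and trial count are arbitrary choices:

```python
import math
import random

def classic_secretary_success(n, trials=20000, seed=1):
    """Estimate the success probability of the classical 1/e rule:
    skip the first ~n/e candidates, then accept the first candidate
    better than everyone seen so far (noiseless observations)."""
    rng = random.Random(seed)
    cutoff = round(n / math.e)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))  # rank 0 is the best candidate
        rng.shuffle(ranks)
        best_skipped = min(ranks[:cutoff]) if cutoff else n
        # accept the first later candidate beating the skipped sample,
        # defaulting to the last candidate if none does
        chosen = next((r for r in ranks[cutoff:] if r < best_skipped),
                      ranks[-1])
        wins += (chosen == 0)
    return wins / trials

print(classic_secretary_success(50))  # close to 1/e, about 0.37
```

With noisy observations this guarantee breaks down, which is exactly the phenomenon the paper quantifies via p_n.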
Richard P. Ebstein, Salomon Israel, Ori Weisel, and Gary Bornstein. 2012. “Oxytocin, but Not Vasopressin, Increases Both Parochial and Universal Altruism”.
In today's increasingly interconnected world, deciding with whom and at what level to cooperate becomes a matter of increasing importance as societies become more globalized and large-scale cooperation becomes a viable means of addressing global issues. This tension can play out via competition between local (e.g., within a group) and global (e.g., between groups) interests. Despite research highlighting factors influencing cooperation in such multi-layered situations, their biological basis is not well understood. In a double-blind placebo controlled study, we investigated the influence of intranasally administered oxytocin and arginine vasopressin on cooperative behavior at local and global levels. We find that oxytocin causes an increase in both the willingness to cooperate and the expectation that others will cooperate at both levels. In contrast, participants receiving vasopressin did not differ from those receiving placebo in their cooperative behavior. Our results highlight the selective role of oxytocin in intergroup cooperative behavior.
We prove that a single-valued solution of perfectly competitive TU economies underlying nonatomic vector measure market games is uniquely determined as the Mertens (1988) value by four plausible value-related axioms. Since the Mertens value is always in the core of an economy, this result provides an axiomatization of the Mertens value as a core-selection. Previous works on this matter assumed the economies to be either differentiable (e.g., Dubey and Neyman (1984)) or of uniform finite type (e.g., Haimanko (2002)). This work makes no such assumptions, and thus contributes to the axiomatic study of payoffs in perfectly competitive economies in general.
The problem of disagreement asks about the appropriate response (typically the response of a peer) upon encountering a disagreement between peers. The responses proposed in the literature offer different solutions to the problem, each of which has more or less normative appeal. Yet none of these seems to engage with what seems to be the real problem of disagreement. It is my aim in this paper to highlight what I think the real problem of disagreement is. It is, roughly, the problem of deciding whether a revisionary tactic is appropriate following the discovery of disagreement as well as deciding which revisionary tactic is appropriate. This, I will show, is a slippery and inevitable problem that any discussion of disagreement ought to deal with.
Among the single-valued solution concepts studied in cooperative game theory and economics, those which are also positive projections play an important role. The value, semivalues, and quasivalues of a cooperative game are several examples of solution concepts which are positive projections. These solution concepts are known to have many important applications in economics. In many applications the specific positive projection discussed is represented as an expectation of marginal contributions of agents to "random" coalitions. Usually these representations are used to characterize positive projections obeying certain additional axioms. It is thus of interest to study the representation theory of positive projections and its relation with some common axioms. We study positive projections defined over certain spaces of nonatomic Lipschitz vector measure games. To this end, we develop a general notion of "calculus" for such games, which in a sense extends the notion of the Radon-Nikodym derivative for measures. We prove several representation results for positive projections, which essentially state that the image of a game under the action of a positive projection can be represented as an averaging of its derivative w.r.t. some vector measure. We then introduce a specific calculus for the space $\mathcal{CON}$ generated by concave, monotonically nondecreasing, and Lipschitz continuous functions of finitely many nonatomic probability measures. We study in detail the properties of the resulting representations of positive projections on $\mathcal{CON}$ and especially those of values on $\mathcal{CON}$. The latter results are of great importance in various applications in economics.