Publications

2015
Weiss, U. (2015). The Robber Wants To Be Punished. Discussion Papers. Retrieved from /files/dp685.pdf
It is a commonly held intuition that increasing punishment leads to less crime. We shift our attention from the punishment for the crime itself to the punishment for the attempt to commit a crime, or for the threat to carry it out. We argue that the greater the punishment for the attempt to rob, i.e., for the threat "give me your money or else ...", the greater the number of robberies and attempts there will be. The punishment for the threat makes withdrawing from it more expensive for the criminal, making the relative cost of committing the crime lower. In other words, the punishment of the attempt turns the attempt into a commitment by the robber, while at the same time turning an incredible threat into a credible one. Therefore, the robber has a strong interest in a legal system that increases the punishment of the attempt.
Omer Edhan, Ziv Hellman, D. S.-R. (2015). Sex and Portfolio Investment. Discussion Papers. Retrieved from /files/dp683.pdf
We attempt to answer why sex is nearly ubiquitous when asexual reproduction is ostensibly more efficient than sexual reproduction. From the perspective of a genetic allele, each individual bearing that allele is akin to a stock share yielding dividends equal to that individual's number of offspring, and the totality of individuals bearing the allele is its portfolio investment. Alleles compete over portfolio growth, and evolutionary reproduction strategies are essentially on-line learning algorithms seeking improved portfolio growth, with sexual reproduction a goal-directed algorithmic exploration of genotype space by sampling in each generation. The model assumes a stochastically changing environment but not weak selection. We show that in finite population models the algorithm of sexual reproduction yields, with high probability, higher expected growth than the algorithm of asexual reproduction does, proposing this as an explanation of why a majority of species reproduce sexually.
Dean P. Foster, S. H. (2015). Smooth Calibration, Leaky Forecasts, and Finite Recall. Discussion Papers.
We propose to smooth out the calibration score, which measures how good a forecaster is, by combining nearby forecasts. While regular calibration can be guaranteed only by randomized forecasting procedures, we show that smooth calibration can be guaranteed by deterministic procedures. As a consequence, it does not matter if the forecasts are leaked, i.e., made known in advance: smooth calibration can nevertheless be guaranteed (while regular calibration cannot). Moreover, our procedure has finite recall, is stationary, and all forecasts lie on a finite grid. We also consider related problems: online linear regression, weak calibration, and uncoupled Nash dynamics in n-person games.
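For a concrete sense of the object being smoothed, here is a minimal sketch (not from the paper) of a standard binned calibration score for binary forecasts; the function name, the binning, and the choice of squared gaps are illustrative assumptions.

```python
from collections import defaultdict

def calibration_score(forecasts, outcomes, bins=10):
    """Average squared gap between each forecast bin's mean forecast and its
    empirical frequency, weighted by bin size. A perfectly calibrated
    forecaster scores 0."""
    groups = defaultdict(list)
    for p, y in zip(forecasts, outcomes):
        groups[min(int(p * bins), bins - 1)].append((p, y))
    n = len(forecasts)
    score = 0.0
    for pairs in groups.values():
        mean_p = sum(p for p, _ in pairs) / len(pairs)
        freq = sum(y for _, y in pairs) / len(pairs)
        score += (len(pairs) / n) * (mean_p - freq) ** 2
    return score

# A constant forecast of 0.5 on a balanced outcome sequence is calibrated:
print(calibration_score([0.5] * 4, [1, 0, 1, 0]))  # 0.0
```

Smooth calibration, as studied in the paper, replaces this hard binning by combining nearby forecasts; the hard-binned score above is the "regular" notion that only randomized procedures can guarantee.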
Amir, N. (2015). Uniqueness of Optimal Strategies in Captain Lotto Games. Discussion Papers. Retrieved from /files/dp687.pdf
We consider the class of two-person zero-sum allocation games known as Captain Lotto games (Hart 2014). These are Colonel Blotto type games in which the players have capacity constraints. We show that the players' optimal strategies are unique in most cases.
2014
Maya Bar-Hillel, Eyal Peer, A. A. (2014). "Heads or Tails?" - A Reachability Bias in Binary Choice. Discussion Papers; Journal of Experimental Psychology: Learning, Memory, and Cognition, April 28, 2014. Retrieved from /files/dp657.pdf
When asked to mentally simulate coin tosses, people generate sequences which differ systematically from those generated by fair coins. It has rarely been noted that this divergence is apparent already in the very first mental toss. Analysis of several existing data sets reveals that about 80% of respondents start their sequence with Heads. We attributed this to the linguistic convention describing coin toss outcomes as "Heads or Tails", not vice versa. However, our subsequent experiments found the "first-toss" bias reversible under minor changes in the experimental setup, such as mentioning Tails before Heads in the instructions. We offer a comprehensive account in terms of a novel response bias, which we call reachability. It is more general than the first-toss bias, and reflects the relative ease of reaching one option compared to its alternative in any binary choice context. When faced with a choice between two options (e.g., Heads and Tails, when "tossing" mental coins), whichever of the two is presented first by the choice architecture (hence, is more reachable) will be favored. This bias has far-reaching implications extending well beyond the context of randomness cognition, in particular to binary surveys (e.g., accept vs. reject) and tests (e.g., True-False). In binary choice, there is an advantage to whatever is presented first.
Keywords: acquiescence bias; order effects; randomness cognition; reachability; response bias
Yannai A. Gonczarowski, N. N. (2014). A Stable Marriage Requires Communication. Discussion Papers. Retrieved from /files/dp667.pdf
The Gale-Shapley algorithm for the Stable Marriage Problem is known to take Theta(n^2) steps to find a stable marriage in the worst case, but only Theta(n log n) steps in the average case (with n women and n men). In 1976, Knuth asked whether the worst-case running time can be improved in a model of computation that does not require sequential access to the whole input. A partial negative answer was given by Ng and Hirschberg, who showed that Theta(n^2) queries are required in a model that allows certain natural random-access queries to the participants' preferences. Using a reduction to the communication complexity of the disjointness problem, we prove a significantly more general - albeit slightly weaker - result, showing that Omega(n^2) Boolean queries of any type are required. Our lower bound generalizes to (A) randomized algorithms, (B) even just verifying the stability of a proposed marriage, (C) even allowing arbitrary separate preprocessing of the women's preferences and of the men's preferences, and (D) several variants of the basic problem, such as whether a given pair is married in every/some stable marriage.
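For reference, a minimal sketch of the deferred-acceptance (Gale-Shapley) algorithm whose query complexity the paper bounds; the Python rendering and data layout are illustrative, not taken from the paper.

```python
def gale_shapley(men_prefs, women_prefs):
    """Men-proposing deferred acceptance. Each prefs dict maps a person to a
    ranked list of the other side; returns a {woman: man} stable matching."""
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_proposal = {m: 0 for m in men_prefs}   # index of next woman to try
    engaged = {}                                # woman -> man
    free = list(men_prefs)
    while free:
        m = free.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])             # w trades up; old fiance freed
            engaged[w] = m
        else:
            free.append(m)                      # w rejects m
    return engaged

men = {"a": ["x", "y"], "b": ["y", "x"]}
women = {"x": ["a", "b"], "y": ["b", "a"]}
print(gale_shapley(men, women))  # {'y': 'b', 'x': 'a'}
```

In the worst case the proposal loop runs Theta(n^2) times, which is the sequential baseline the paper's communication lower bound matches.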
Todd R. Kaplan, S. Z. (2014). Advances in Auctions. Discussion Papers; a chapter in the forthcoming Handbook of Game Theory, Volume 4, edited by Peyton Young and Shmuel Zamir, Elsevier (2014). Retrieved from /files/dp662.pdf
As a selling mechanism, auctions have acquired a central position in the free market economy all over the globe. This development has deepened, broadened, and expanded the theory of auctions in new directions. This chapter is intended as a selective update of some of the developments and applications of auction theory in the two decades since Wilson (1992) wrote the previous Handbook chapter on this topic.
Hart, S. (2014). Allocation Games with Caps: From Captain Lotto to All-Pay Auctions. Discussion Papers.
A Lotto game is a two-person zero-sum game where each player chooses a distribution on nonnegative real numbers with given expectation, so as to maximize the probability that his realized choice is higher than his opponent's. These games arise in various competitive allocation setups (e.g., contests, research and development races, political campaigns, Colonel Blotto games). A Captain Lotto game is a Lotto game with caps, which are upper bounds on the numbers that may be chosen. First, we solve the Captain Lotto games. Second, we show how to reduce all-pay auctions to simpler games (expenditure games) using the solution of the corresponding Lotto games. As a particular application we solve all-pay auctions with unequal caps, which yield a significant increase in the seller's revenue (or, the players' efforts).
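Purely for illustration (not the paper's analysis): in the uncapped Lotto game the uniform distribution on [0, 2*mu] is the classical optimal strategy for a player with expectation mu. A small Monte Carlo sketch, with function names as assumptions, estimating its win probability against a deterministic opponent of the same expectation:

```python
import random

def win_prob(strategy_a, strategy_b, trials=100_000, seed=0):
    """Estimate P(A's draw beats B's draw) by simulation; ties count half."""
    rng = random.Random(seed)
    wins = 0.0
    for _ in range(trials):
        a, b = strategy_a(rng), strategy_b(rng)
        wins += 1.0 if a > b else 0.5 if a == b else 0.0
    return wins / trials

mu = 1.0
uniform = lambda rng: rng.uniform(0, 2 * mu)  # expectation mu, spread out
point = lambda rng: mu                        # expectation mu, deterministic
print(round(win_prob(uniform, point), 2))     # about 0.5: neither side gains
```

Caps change this picture, since the uniform strategy's support [0, 2*mu] may exceed the allowed range; that capped case is what the paper solves.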
Gali Noti, Noam Nisan, I. Y. (2014). An Experimental Evaluation of Bidders' Behavior in Ad Auctions. Discussion Papers; WWW '14: Proceedings of the 23rd International Conference on World Wide Web, pages 619-630. Retrieved from /files/dp676.pdf
We performed controlled experiments with human participants in a continuous sequence of ad auctions, similar to those used by Internet companies. The goal of the research was to understand users' strategies in making bids. We studied behavior under two auction types: (1) the Generalized Second-Price (GSP) auction and (2) the Vickrey-Clarke-Groves (VCG) payment rule. We also manipulated the participants' knowledge conditions: (1) explicitly given valuations and (2) payoff information from which valuations could be deduced. We found several interesting behaviors, among them:
- No convergence to equilibrium was detected; moreover, the frequency with which participants modified their bids increased with time.
- We can detect explicit "better-response" behavior rather than just mixed bidding.
- While bidders in GSP auctions do strategically shade their bids, they tend to bid higher than theoretically predicted by the standard VCG-like equilibrium of GSP.
- Bidders who are not explicitly given their valuations but can only deduce them from their gains behave a little less "precisely" than those with such explicit knowledge, but mostly during an initial learning phase.
- VCG and GSP yield approximately the same (high) social welfare, but GSP tends to give higher revenue.
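The two payment rules the experiment compares can be sketched for a simple position auction with known click rates; the bid and click-rate numbers, and the assumption of truthful descending bids, are illustrative and not the experimental design.

```python
def auction_payments(bids, ctrs):
    """Payments in a position auction: slot i gets click rate ctrs[i],
    bids are sorted descending, one slot per winner.
    GSP: each winner pays the next-highest bid per click.
    VCG: each winner pays the externality it imposes (bottom-up recursion)."""
    k = len(ctrs)
    gsp = [ctrs[i] * (bids[i + 1] if i + 1 < len(bids) else 0.0)
           for i in range(k)]
    vcg = [0.0] * k
    for i in range(k - 1, -1, -1):
        below = vcg[i + 1] if i + 1 < k else 0.0
        next_bid = bids[i + 1] if i + 1 < len(bids) else 0.0
        next_ctr = ctrs[i + 1] if i + 1 < k else 0.0
        vcg[i] = below + next_bid * (ctrs[i] - next_ctr)
    return gsp, vcg

# Three bidders, two slots: at identical bids GSP collects more than VCG,
# in line with the revenue comparison reported above.
print(auction_payments([10, 4, 2], [1.0, 0.5]))  # ([4.0, 1.0], [3.0, 1.0])
```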
Yannai A. Gonczarowski, M. T. (2014). Cascading to Equilibrium: Hydraulic Computation of Equilibria in Resource Selection Games. Discussion Papers. Retrieved from /files/dp673.pdf
Drawing intuition from a (physical) hydraulic system, we present a novel framework, constructively showing the existence of a strong Nash equilibrium in resource selection games with nonatomic players, the coincidence of strong equilibria and Nash equilibria in such games, and the invariance of the cost of each given resource across all Nash equilibria. Our proofs allow for explicit calculation of Nash equilibrium and for explicit and direct calculation of the resulting (invariant) costs of resources, and do not hinge on any fixed-point theorem, on the Minimax theorem or any equivalent result, on the existence of a potential, or on linear programming. A generalization of resource selection games, called resource selection games with I.D.-dependent weighting, is defined, and the results are extended to this family, showing that while resource costs are no longer invariant across Nash equilibria in games of this family, they are nonetheless invariant across all strong Nash equilibria, drawing a novel fundamental connection between group deviation and I.D.-congestion. A natural application of the resulting machinery to a large class of constraint-satisfaction problems is also described.
Bezalel Peleg, H. P. (2014). Choosing K from M: Feasible Elimination Procedures Reconsidered. Discussion Papers. Retrieved from /files/dp671.pdf
We show that feasible elimination procedures (Peleg, 1978) can be used to select k from m alternatives. An important advantage of this method is the core property: no coalition can guarantee an outcome that is preferred by all its members. We also provide an axiomatic characterization for the case k = 1, using the conditions of anonymity, Maskin monotonicity, and independent blocking. Finally, we show for any k that outcomes of feasible elimination procedures can be computed in polynomial time, by showing that the problem is computationally equivalent to finding a maximal matching in a bipartite graph.
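The polynomial-time claim rests on bipartite matching. As a generic illustration of the target problem (Kuhn's augmenting-path algorithm; this is not the paper's reduction, and the example graph is an assumption):

```python
def max_bipartite_matching(adj):
    """Kuhn's augmenting-path algorithm. adj maps each left vertex to the
    right vertices it may be matched with. Returns {right: left}."""
    match = {}  # right vertex -> left vertex

    def try_assign(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            # v is free, or its current partner can be re-routed elsewhere
            if v not in match or try_assign(match[v], seen):
                match[v] = u
                return True
        return False

    for u in adj:
        try_assign(u, set())
    return match

adj = {"a": [1, 2], "b": [1], "c": [2, 3]}
print(len(max_bipartite_matching(adj)))  # 3
```

Each augmenting search is linear in the number of edges, so the whole procedure runs in polynomial time, which is the complexity class the equivalence in the abstract delivers.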
Ofri Raviv, Itay Lieder, Y. L. M. A. (2014). Contradictory Behavioral Biases Result from the Influence of Past Stimuli on Perception. Discussion Papers. Retrieved from /files/dp672.pdf
Biases, such as the preference for a particular response for no obvious reason, are an integral part of psychophysics. Such biases have been reported in the common two-alternative forced choice (2AFC) experiments, where participants are instructed to compare two consecutively presented stimuli. However, the principles underlying these biases are largely unknown, and previous studies have typically used ad-hoc explanations to account for them. Here we consider human performance in the 2AFC tone frequency discrimination task, utilizing two standard protocols. In both protocols, each trial contains a reference stimulus. In one (Reference-Lower protocol), the frequency of the reference stimulus is always lower than that of the comparison stimulus, whereas in the other (Reference protocol), the frequency of the reference stimulus is either lower or higher than that of the comparison stimulus. We find substantial interval biases. Namely, participants perform better when the reference is in a specific interval. Surprisingly, the biases in the two experiments are opposite: performance is better when the reference is in the first interval in the Reference protocol, but better when the reference is second in the Reference-Lower protocol. This inconsistency refutes previous accounts of the interval bias, and is resolved when the statistics of the experiment are considered. Viewing perception as the incorporation of sensory input with prior knowledge accumulated during the experiment accounts for the seemingly contradictory biases, both qualitatively and quantitatively. The success of this account implies that even simple discriminations reflect a combination of sensory limitations, memory limitations, and the ability to utilize stimulus statistics.
Itai Arieli, Y. (J.) L. (2014). Determinacy of Games with Stochastic Eventual Perfect Monitoring. Discussion Papers. Retrieved from /files/dp658.pdf
We consider an infinite two-player stochastic zero-sum game with a Borel winning set, in which the opponent's actions are monitored via stochastic private signals. We introduce two conditions on the signalling structure: Stochastic Eventual Perfect Monitoring (SEPM) and Weak Stochastic Eventual Perfect Monitoring (WSEPM). When signals are deterministic, these two conditions coincide and, by a recent result due to Shmaya (2011), entail determinacy of the game. We generalize Shmaya's (2011) result and show that in the stochastic learning environment SEPM implies determinacy while WSEPM does not.
Sergiu Hart, N. N. (2014). How Good Are Simple Mechanisms for Selling Multiple Goods? Discussion Papers.
Maximizing the revenue from selling two goods (or items) is a notoriously difficult problem, in stark contrast to the single-good case. We show that simple "one-dimensional" mechanisms, such as selling the two goods separately, guarantee at least 73% of the optimal revenue when the valuations of the two goods are independent and identically distributed, and at least 50% when they are independent. However, in the general case where the valuations may be correlated, simple mechanisms cannot guarantee any positive fraction of the optimal revenue. We also introduce a "measure of complexity" for mechanisms, the menu size, and show that it is naturally related to the fraction of the optimal revenue that can be guaranteed.
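Why two goods are harder than one can be seen in a textbook-style example, sketched here for illustration (the distribution and the candidate prices are assumptions, not the paper's): with two iid values uniform on {1, 2}, bundling already beats the best separate prices, so selling separately is not optimal in general.

```python
from itertools import product

vals = [1, 2]                          # each good's value: 1 or 2, equally likely
outcomes = list(product(vals, vals))   # four equally likely valuation pairs

def separate_revenue(p1, p2):
    """Post a price per good; the buyer takes each good valued at its price or more."""
    return sum((p1 if v1 >= p1 else 0) + (p2 if v2 >= p2 else 0)
               for v1, v2 in outcomes) / len(outcomes)

def bundle_revenue(p):
    """Post a single price for both goods together."""
    return sum(p if v1 + v2 >= p else 0 for v1, v2 in outcomes) / len(outcomes)

best_separate = max(separate_revenue(p1, p2) for p1 in vals for p2 in vals)
best_bundle = max(bundle_revenue(p) for p in (2, 3, 4))
print(best_separate, best_bundle)      # 2.0 2.25: the bundle extracts more
```

Mechanisms mixing bundling, separate sale, and lotteries can do better still, which is why the guarantees in the abstract are stated as fractions of an often hard-to-describe optimum.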
Einav Hart, Judith Avrahami, Y. K. P. M. T. (2014). Investing Even in Uneven Contests: Effects of Asymmetry on Investment in Experimental All-Pay Contests. Discussion Papers. Retrieved from /files/db660.pdf
Many competitions require investment of nonrefundable resources, e.g., political campaigns, financial markets, sports or courting rituals. One contestant wins the prize for the invested amount, while all others forfeit their investments without receiving compensation. Frequently, contests are asymmetric, due to differing resources or prize valuations. This could lead weaker contestants to avoid investing, and stronger ones to lower their investment. Two experiments explored the effects of asymmetry between the contestants (arising from their endowments or prizes) on investments. Subjects played both symmetric and asymmetric contests, enabling direct within-subject comparisons. We observed an effect of asymmetry only when it concerned endowments: Subjects invested less when their endowments were asymmetric, whereas (a-)symmetry in the prizes did not influence investments. The changes between consecutive investments can be explained by reactions to the previous outcome (win or loss) in terms of regret over the previous investment being too much or too little.
Gianluigi Mongillo, Hanan Shteingart, Y. L. (2014). The Misbehavior of Reinforcement Learning. Discussion Papers; forthcoming in Proc. IEEE. Retrieved from /files/db661.pdf
Organisms modify their behavior in response to its consequences, a phenomenon referred to as operant learning. The computational principles and neural mechanisms underlying operant learning are a subject of extensive experimental and theoretical investigations. Theoretical approaches largely rely on concepts and algorithms from Reinforcement Learning. The dominant view is that organisms maintain a value function, that is, a set of estimates of the cumulative future rewards associated with the different behavioral options. These values are then used to select actions. Learning in this framework results from the update of these values depending on experience of the consequences of past actions. An alternative view questions the applicability of such a computational scheme to many real-life situations. Instead, it posits that organisms exploit the intrinsic variability in their action selection mechanism(s) to modify their behavior, e.g., via stochastic gradient ascent, without the need for an explicit representation of values. In this review, we compare these two approaches in terms of their computational power and flexibility, their putative neural correlates and, finally, in terms of their ability to account for behavior as observed in repeated-choice experiments. We discuss the successes and failures of these alternative approaches in explaining the observed patterns of choice behavior. We conclude by identifying some of the important challenges to a comprehensive theory of operant learning.
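The two views contrasted in the review can be caricatured on a two-armed bandit; this sketch is purely illustrative, and the learning rates, exploration scheme, and update rules are assumptions, not the review's models.

```python
import math
import random

def value_learner(p_reward, steps=2000, lr=0.1, eps=0.1, seed=1):
    """Value-based view: keep a reward estimate per arm, act greedily
    with eps-exploration. Returns the finally preferred arm."""
    rng = random.Random(seed)
    q = [0.0, 0.0]
    for _ in range(steps):
        a = rng.randrange(2) if rng.random() < eps else q.index(max(q))
        r = 1.0 if rng.random() < p_reward[a] else 0.0
        q[a] += lr * (r - q[a])                 # incremental value update
    return q.index(max(q))

def gradient_learner(p_reward, steps=2000, lr=0.1, seed=1):
    """Value-free view: nudge action preferences directly by a
    reward-weighted (policy-gradient style) update; no value estimates."""
    rng = random.Random(seed)
    h = [0.0, 0.0]                              # action preferences
    baseline, n = 0.0, 0
    for _ in range(steps):
        z = math.exp(h[0]) + math.exp(h[1])
        p0 = math.exp(h[0]) / z
        a = 0 if rng.random() < p0 else 1
        r = 1.0 if rng.random() < p_reward[a] else 0.0
        n += 1
        baseline += (r - baseline) / n          # running mean reward
        probs = (p0, 1.0 - p0)
        for i in (0, 1):
            grad = (1.0 - probs[i]) if i == a else -probs[i]
            h[i] += lr * (r - baseline) * grad
    return 0 if h[0] > h[1] else 1

# On arms paying off with probability 0.2 vs 0.8, both learners should
# come to prefer the richer arm (index 1), despite the different machinery.
print(value_learner([0.2, 0.8]), gradient_learner([0.2, 0.8]))
```

Distinguishing the two empirically is harder than this toy suggests, which is exactly the comparison the review takes up.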
O'Neill, B. (2014). Networks of Rights in Conflict: A Talmudic Example. Discussion Papers. Retrieved from /files/db677.pdf
Many disputes involve conflicts of rights. A common view is that rights cannot really be in conflict, so one of those being claimed must be a mistake. This idea leads to extreme outcomes that cut some parties out. Many studies have investigated how to choose a compromise among rights, but they have focused on situations where the incompatibility comes from the degrees of the claims, as when, for example, a deceased person promised his heirs more than his total estate. I analyze a Talmudic problem where the difficulty is the pattern of the rights - each one trumps another in a cycle. The theory of non-transferable utility (NTU) coalitional games suggests two solutions, one based on Shapley's and Maschler-Owen's values, which are equivalent for the problem, and the other on Harsanyi's and Kalai-Samet's, also equivalent. Each satisfies four out of five desirable properties, better than several other solutions. The NTU games are appropriate not just for power-based negotiation but for disputes over justice, fairness and rights. It is hoped that this analysis will form part of a general understanding of rights conflicts.
Yannai A. Gonczarowski, M. T. (2014). Noncooperative Market Allocation and the Formation of Downtown. Discussion Papers. Retrieved from /files/db663.pdf
Can noncooperative behaviour of merchants lead to a market allocation that prima facie seems anticompetitive? We introduce a model in which service providers aim at optimizing the number of customers who use their services, while customers aim at choosing service providers with minimal customer load. Each service provider chooses between a variety of levels of service, and as long as it does not lose customers, aims at minimizing its level of service; the minimum level of service required to satisfy a customer varies across customers. We consider a two-stage competition, in the first stage of which the service providers select their levels of service, and in the second stage of which customers choose between the service providers. (We show via a novel construction that for any choice of strategies for the service providers, a unique distribution of the customers' mass between them emerges from all Nash equilibria among the customers, showing the incentives of service providers in the two-stage game to be well defined.) In the two-stage game, we show that the competition among the service providers possesses a unique Nash equilibrium, which is moreover super strong; we also show that all sequential better-response dynamics of service providers reach this equilibrium, with best-response dynamics doing so surprisingly fast. If service providers choose their levels of service according to this equilibrium, then the unique Nash equilibrium among customers in the second phase is essentially an allocation (i.e. split) of the market between the service providers, based on the customers' minimum acceptable quality of service; moreover, each service provider's chosen level of service is the lowest acceptable by the entirety of the market share allocated to it. Our results show that this seemingly-cooperative allocation of the market arises as the unique and highly-robust outcome of noncooperative (i.e. free from any form of collusion), even myopic, service-provider behaviour.
The results of this paper are applicable to a variety of scenarios, such as the competition among ISPs, and shed a surprising light on aspects of location theory, such as the formation and structure of a city's central business district.
David Azriel, Y. R. (2014). On Measuring and Comparing Usefulness of Statistical Models. Discussion Papers. Retrieved from /files/db669.pdf
Statistical models in econometrics, biology, and most other areas are not expected to be correct, and often are not very accurate. The choice of a model for the analysis of data depends on the purpose of the analysis, the relation between the data and the model, and also on the sample or data size. Combining ideas from Erev, Roth, Slonim, and Barron (2007) with the well-known AIC criterion and cross-validation, we propose a variant of the model-selection approach as a function of the models and the data size, with quantification of the chosen model's relative value. Our research is motivated by data from experimental economics, and we also give a simple biological example.
Irit Nowik, S. Z. (2014). On the Risk in Deviating from Nash Equilibrium. Discussion Papers. Retrieved from /files/dp664.pdf
The purpose of this work is to offer, for any zero-sum game with a unique strictly mixed Nash equilibrium, a measure of the risk in deviating from the Nash equilibrium. We present two approaches regarding the nature of deviations: strategic and erroneous. Accordingly, we define two models. In each model we define risk measures for the row player (PI) and the column player (PII), and prove that the risks of PI and PII coincide. This result holds for any norm we use for the size of deviations. We develop explicit expressions for the risk measures in the L1 and L2 norms, and compute them for several games. Although the results hold for all norms, we show that only the L1 norm is suitable in our context, as it is the only norm which is consistent in the sense that it gives the same size to potentially equivalent deviations. The risk measures defined here enable testing and evaluating predictions on the behavior of players. For example: do players deviate more in a game with lower risks than in a game with higher risks?