2010
Lev, O. (2010).
A Two-Dimensional Problem of Revenue Maximization.
Discussion Papers. Retrieved from
/files/dp542.pdf
Abstract: We consider the problem of finding the mechanism that maximizes the revenue of a seller of multiple objects. This problem turns out to be significantly more complex than the case where there is only a single object (which was solved by Myerson [5]). The analysis is difficult even in the simplest case studied here, where there are two exclusive objects and a single buyer, with valuations uniformly distributed on triangular domains. We show that the optimal mechanisms are piecewise linear with either 2 or 3 pieces, and obtain explicit formulas for most cases of interest.
Hellman, Z. (2010).
Almost Common Priors.
Discussion Papers. Retrieved from
/files/dp560R.pdf
Abstract: What happens when priors are not common? We show that for each type profile τ over a knowledge space (Ω, Π), where the state space Ω is connected with respect to the partition profile Π, we can associate a value between 0 and 1 that we term the prior distance of τ.
Arieli, I. (2010).
Backward Induction and Common Strong Belief of Rationality.
Discussion Papers. Retrieved from
/files/dp535.pdf
Abstract: In 1995, Aumann showed that in games of perfect information, common knowledge of rationality is consistent and entails the backward induction (BI) outcome. That work has been criticized because it uses "counterfactual" reasoning: what a player "would" do if he reached a node that he knows he will not reach, indeed that he himself has excluded by one of his own previous moves. This paper derives an epistemological characterization of BI that is outwardly reminiscent of Aumann's, but avoids counterfactual reasoning. Specifically, we say that a player strongly believes a proposition at a node of the game tree if he believes the proposition unless it is logically inconsistent with that node having been reached. We then show that common strong belief of rationality is consistent and entails the BI outcome, where - as with knowledge - the word "common" signifies strong belief, strong belief of strong belief, and so on ad infinitum. Our result is related to - though not easily derivable from - one obtained by Battigalli and Siniscalchi [7]. Their proof is, however, much deeper; it uses a full-blown semantic model of probabilities, and belief is defined as attribution of probability 1. We, in contrast, work with a syntactic model, defining belief directly by a sound and complete set of axioms, and the proof is relatively direct.
Alon, N., Emek, Y., Feldman, M., & Tennenholtz, M. (2010).
Bayesian Ignorance.
Discussion Papers. Retrieved from
/files/dp538.pdf
Abstract: We quantify the effect of Bayesian ignorance by comparing the social cost obtained in a Bayesian game by agents with local views to the expected social cost of agents having global views. Both benevolent agents, whose goal is to minimize the social cost, and selfish agents, aiming at minimizing their own individual costs, are considered. When dealing with selfish agents, we consider both best and worst equilibria outcomes. While our model is general, most of our results concern the setting of network cost sharing (NCS) games. We provide tight asymptotic results on the effect of Bayesian ignorance in directed and undirected NCS games with benevolent and selfish agents. Among our findings we expose the counter-intuitive phenomenon that "ignorance is bliss": Bayesian ignorance may substantially improve the social cost of selfish agents. We also prove that public random bits can replace the knowledge of the common prior in an attempt to bound the effect of Bayesian ignorance in settings with benevolent agents. Together, our work initiates the study of the effects of local vs. global views on the social cost of agents in Bayesian contexts.
Malinovsky, Y., & Rinott, Y. (2010).
Best Invariant and Minimax Estimation of Quantiles in Finite Populations.
Discussion Papers. Journal of Statistical Planning and Inference 141, 2633–2644 (2011). Retrieved from
/files/dp553.pdf
Abstract: We study estimation of finite population quantiles, with emphasis on estimators that are invariant under monotone transformations of the data, and suitable invariant loss functions. We discuss non-randomized and randomized estimators, best invariant and minimax estimators, and sampling strategies relative to different classes. The combination of natural invariance of the kind discussed here and finite population sampling appears to be novel, and leads to interesting statistical and combinatorial aspects.
Hart, S. (2010).
Comparing Risks by Acceptance and Rejection.
Discussion Papers.
Abstract: Stochastic dominance is a partial order on risky assets ("gambles") that is based on the uniform preference, of all decision-makers (in an appropriate class), for one gamble over another. We modify this, first, by taking into account the status quo (given by the current wealth) and the possibility of rejecting gambles, and second, by comparing rejections that are substantive (that is, uniform over wealth levels or over utilities). This yields two new stochastic orders: wealth-uniform dominance and utility-uniform dominance. Unlike stochastic dominance, these two orders are complete: any two gambles can be compared. Moreover, they are equivalent to the orders induced by, respectively, the Aumann-Serrano (2008) index of riskiness and the Foster-Hart (2009a) measure of riskiness.
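The two riskiness indices mentioned at the end of this abstract have simple defining equations in the works cited there: the Aumann-Serrano (2008) index of a gamble g is the unique R > 0 solving E[exp(-g/R)] = 1, and the Foster-Hart (2009a) measure is the unique R larger than the maximal loss solving E[log(1 + g/R)] = 0. The Python sketch below is purely illustrative (it is not part of the paper): it computes both quantities for a small discrete gamble by bisection; the bracketing constants and the example gamble are arbitrary choices.

    import math

    def bisect(f, lo, hi, iters=200):
        # Plain bisection for a root of f on [lo, hi], assuming f(lo) and f(hi) differ in sign.
        sign_lo = math.copysign(1.0, f(lo))
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if math.copysign(1.0, f(mid)) == sign_lo:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    def aumann_serrano(outcomes, probs):
        # R solves E[exp(-g/R)] = 1; the lower bracket is a heuristic that avoids exp() overflow.
        max_loss = -min(outcomes)
        f = lambda r: sum(p * math.exp(-x / r) for x, p in zip(outcomes, probs)) - 1.0
        return bisect(f, max_loss / 700.0, 1e9)

    def foster_hart(outcomes, probs):
        # R solves E[log(1 + g/R)] = 0; defined only for R greater than the maximal loss.
        max_loss = -min(outcomes)
        f = lambda r: sum(p * math.log(1.0 + x / r) for x, p in zip(outcomes, probs))
        return bisect(f, max_loss * (1.0 + 1e-9), 1e9)

    # Example gamble: gain 120 or lose 100 with equal probability.
    g, p = [120.0, -100.0], [0.5, 0.5]
    print(aumann_serrano(g, p))   # roughly 602
    print(foster_hart(g, p))      # exactly 600 for this two-outcome gamble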
Babichenko, Y. (2010).
Completely Uncoupled Dynamics and Nash Equilibria.
Discussion Papers. Retrieved from
/files/dp529.pdf
Abstract: A completely uncoupled dynamic is a repeated play of a game where, in each period, every player knows only his action set and the history of his own past actions and payoffs. One main result is that there exist no completely uncoupled dynamics with finite memory that lead to pure Nash equilibria (PNE) in almost all games possessing pure Nash equilibria. By "leading to PNE" we mean that the frequency of time periods at which some PNE is played converges to 1 almost surely. Another main result is that this is not the case when PNE is replaced by "Nash epsilon-equilibria": we exhibit a completely uncoupled dynamic with finite memory such that from some time on a Nash epsilon-equilibrium is played almost surely.
Avrahami, J., & Kareev, Y. (2010).
Detecting Change In Partner's Preferences.
Discussion Papers. Retrieved from
/files/dp557.pdf
Abstract: Studies of the detection of change have commonly been concerned with individuals inspecting a system or a process whose characteristics were fully determined by the researcher. We, instead, study the detection of change in the preferences - and hence the behavior - of others with whom an individual interacts. More specifically, we study situations in which one's benefits are the result of the joint actions of oneself and one's partner, when at times the preferred combination is the same for both and at times it is not. In other words, what we change is the payoffs associated with the different combinations of interactive choices, and we then look at choice behavior following such a change. We find that players are extremely quick to respond to a change in the preferences of their counterparts. This responsiveness can be explained by the players' impulsive reaction to regret - if any was due - at their most recent decision.
Ban, A., & Linial, N. (2010).
The Dynamics of Reputation Systems.
Discussion Papers. Retrieved from
/files/dp563.pdf
Abstract: Online reputation systems collect, maintain and disseminate reputations as a summary numerical score of past interactions of an establishment with its users. As reputation systems, including web search engines, gain in popularity and become a common method for people to select sought services, a dynamical system unfolds: experts' reputations attract potential customers; the experts' expertise affects the probability of satisfying the customers; and this rate of success in turn influences the experts' reputations. We consider here several models where each expert has an innate, constant, but unknown level of expertise and a publicly known, dynamically varying reputation.
Cohen, E., Feldman, M., Fiat, A., Kaplan, H., & Olonetsky, S. (2010).
Envy-Free Makespan Approximation.
Discussion Papers. Retrieved from
/files/dp539.pdf
Abstract: We study envy-free mechanisms for scheduling tasks on unrelated machines (agents) that approximately minimize the makespan. For indivisible tasks, we put forward an envy-free poly-time mechanism that approximates the minimal makespan to within a factor of O(log m), where m is the number of machines. We also show a lower bound of Ω(log m / log log m). This improves the recent result of Mu'alem [22], who gives an upper bound of (m + 1)/2 and a lower bound of 2 - 1/m. For divisible tasks, we show that there always exists an envy-free poly-time mechanism with optimal makespan. Finally, we demonstrate how our mechanism for envy-free makespan minimization can be interpreted as a market clearing problem.
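To make the objects in this abstract concrete, the following illustrative Python sketch (not taken from the paper) computes the makespan of an allocation of tasks to unrelated machines and checks envy-freeness of an allocation paired with payments, under the usual quasi-linear convention that machine i values machine j's outcome as j's payment minus i's own cost for j's bundle. The cost matrix, allocation and payments are made-up numbers.

    def bundle_cost(costs, machine, bundle):
        # Total processing cost of a set of tasks on the given machine.
        return sum(costs[machine][task] for task in bundle)

    def makespan(costs, allocation):
        # Largest total cost over machines, given allocation[i] = set of tasks of machine i.
        return max(bundle_cost(costs, i, bundle) for i, bundle in enumerate(allocation))

    def is_envy_free(costs, allocation, payments):
        # No machine prefers another machine's (bundle, payment) pair to its own.
        m = len(allocation)
        for i in range(m):
            own = payments[i] - bundle_cost(costs, i, allocation[i])
            for j in range(m):
                if payments[j] - bundle_cost(costs, i, allocation[j]) > own + 1e-12:
                    return False
        return True

    # Two machines, three tasks; costs[i][t] is the cost of task t on machine i.
    costs = [[1.0, 2.0, 4.0],
             [3.0, 1.0, 1.0]]
    allocation = [{0}, {1, 2}]     # machine 0 runs task 0; machine 1 runs tasks 1 and 2
    payments = [1.0, 2.0]          # here: pay each machine exactly its own cost
    print(makespan(costs, allocation))                 # -> 2.0
    print(is_envy_free(costs, allocation, payments))   # -> True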
Bartroff, J., & Samuel-Cahn, E. (2010).
The Fighter Problem: Optimal Allocation of a Discrete Commodity.
Discussion Papers. Advances in Applied Probability (2011), Vol. 43, 121-130. Retrieved from
/files/dp558.pdf
Abstract: The Fighter problem with discrete ammunition is studied. An aircraft (fighter) equipped with n anti-aircraft missiles is intercepted by enemy airplanes, the appearance of which follows a homogeneous Poisson process with known intensity. If j of the n missiles are spent at an encounter they destroy an enemy plane with probability a(j), where a(0) = 0 and a(j) is a known, strictly increasing concave sequence, e.g., a(j) = 1 - q^j, 0 < q < 1. If the enemy is not destroyed, the enemy shoots the fighter down with known probability 1 - u, where 0 ≤ u ≤ 1. The goal of the fighter is to shoot down as many enemy airplanes as possible during a given time period [0, T]. Let K(n, t) be an optimal number of missiles to be used at a present encounter, when the fighter has flying time t remaining and n missiles remaining. Three seemingly obvious properties of K(n, t) have been conjectured: [A] the closer to the destination, the more of the n missiles one should use; [B] the more missiles one has, the more one should use; and [C] the more missiles one has, the more one should save for possible future encounters. We show that [C] holds for all 0 ≤ u ≤ 1, that [A] and [B] hold for the "Invincible Fighter" (u = 1), and that [A] holds but [B] fails for the "Frail Fighter" (u = 0).
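The optimization sketched in this abstract can be approximated numerically by discretizing time and running value iteration. The Python snippet below is only an illustration (it is not the paper's continuous-time analysis): it assumes encounters arrive at Poisson rate LAM, takes a(j) = 1 - q^j as in the example above, and takes the fighter's probability of surviving an encounter in which it spends j missiles to be a(j) + (1 - a(j))u. All parameter values are arbitrary.

    # Illustrative parameters (assumptions, not from the paper).
    Q, U, LAM, T, N, STEPS = 0.5, 0.7, 1.0, 5.0, 8, 2000
    dt = T / STEPS
    a = lambda j: 1.0 - Q ** j            # probability of destroying the enemy with j missiles

    V = [0.0] * (N + 1)                   # V[n] ~ optimal expected number of kills with n missiles
    K = [0] * (N + 1)                     # K[n] ~ optimal number of missiles to spend at an encounter
    for _ in range(STEPS):                # add dt of remaining flying time per iteration
        newV = [0.0] * (N + 1)
        for n in range(N + 1):
            best, best_j = -1.0, 0
            for j in range(n + 1):
                survive = a(j) + (1.0 - a(j)) * U     # fighter survives this encounter
                val = a(j) + survive * V[n - j]       # kill now, plus future value if still flying
                if val > best:
                    best, best_j = val, j
            # an encounter occurs in this time slice with probability LAM * dt
            newV[n] = (1.0 - LAM * dt) * V[n] + LAM * dt * best
            K[n] = best_j
        V = newV

    print("approx V(n, T):", [round(v, 3) for v in V])
    print("approx K(n, T):", K)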
Hellman, Z., & Samet, D. (2010).
How Common Are Common Priors?.
Discussion Papers. Forthcoming in Games and Economic Behavior. Retrieved from
/files/dp532.pdf
Abstract: To answer the question in the title we vary agents' beliefs against the background of a fixed knowledge space, that is, a state space with a partition for each agent. Beliefs are the posterior probabilities of agents, which we call type profiles. We then ask what is the topological size of the set of consistent type profiles, those that are derived from a common prior (or a common improper prior in the case of an infinite state space). The answer depends on what we term the tightness of the partition profile. A partition profile is tight if in some state it is common knowledge that any increase of any single agent's knowledge results in an increase in common knowledge. We show that for partition profiles which are tight the set of consistent type profiles is topologically large, while for partition profiles which are not tight this set is topologically small.
Babichenko, Y. (2010).
How Long to Pareto Efficiency?.
Discussion Papers. Retrieved from
/files/dp562.pdf
Abstract: We consider uncoupled dynamics (i.e., dynamics where each player knows only his own payoff function) that reach Pareto efficient and individually rational outcomes. We prove that the number of periods it takes is in the worst case exponential in the number of players.
Peretz, R. (2010).
Learning Cycle Length Through Finite Automata.
Discussion Papers. Retrieved from
/files/db546.pdf
Abstract: We study the space-and-time automaton-complexity of the CYCLE-LENGTH problem. The input is a periodic stream of bits whose cycle length is bounded by a known number n. The output, a number between 1 and n, is the exact cycle length. We also study a related problem, CYCLE-DIVISOR. In the latter problem the output is a large number that divides the cycle length, that is, a number k >> 1 that divides the cycle length, or (in case the cycle length is small) the cycle length itself. The complexity is measured in terms of the SPACE, the logarithm of the number of states in an automaton that solves the problem, and the TIME required to reach a terminal state. We analyze the worst input against a deterministic (pure) automaton, and against a probabilistic (mixed) automaton. In the probabilistic case we require that the probability of computing a correct output is arbitrarily close to one. We establish the following results: CYCLE-DIVISOR can be solved in deterministic SPACE o(n) and TIME O(n); CYCLE-LENGTH cannot be solved in deterministic SPACE × TIME smaller than Ω(n²); CYCLE-LENGTH can be solved in probabilistic SPACE o(n) and TIME O(n); CYCLE-LENGTH can be solved in deterministic SPACE O(nL) and TIME O(n/L), for any positive L < 1.
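As a plain restatement of what the CYCLE-LENGTH problem asks (ignoring the automaton, SPACE and TIME restrictions that are the subject of the paper), the following Python snippet computes the exact cycle length of a periodic bit stream whose period is at most n, using unrestricted memory; it relies on the fact that the first 2n bits already determine the minimal period. It is meant only to clarify the problem statement.

    from itertools import islice

    def cycle_length(stream, n):
        # Return the minimal period of a periodic bit stream known to have period <= n.
        window = list(islice(stream, 2 * n))          # 2n bits suffice to identify the period
        for k in range(1, n + 1):
            if all(window[i] == window[i % k] for i in range(len(window))):
                return k
        raise ValueError("stream is not periodic with period <= n")

    def periodic(pattern):
        # An infinite periodic bit stream repeating the given pattern.
        while True:
            yield from pattern

    print(cycle_length(periodic([1, 0, 1, 1, 0]), n=8))   # -> 5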
Halbersberg, Y. (2010).
Liability Standards for Multiple-Victim Torts: A Call for a New Paradigm.
Discussion Papers. Retrieved from
/files/db533.pdf
Abstract: Under the conventional approach in torts, liability for an accident is decided by comparing the injurer's costs of precautions with those of the victim, and, under the negligence rule, also with the expected magnitude of harm. In multiple-victim cases, the current paradigm holds that courts should determine liability by comparing the injurer's costs of precautions with the victims' aggregate costs and with their aggregate harm. This aggregative risk-utility test supposedly results in the imposition of liability on the least-cost avoiders of the accident, and, therefore, is assumed efficient. However, this paradigm neglects the importance of the normal differences between tort victims. When victims are heterogeneous with regard to their expected harm or costs of precaution, basing the liability decision on the aggregate amounts may be incorrect, causing in some cases over-deterrence, and in others under-deterrence and dilution of liability. A new paradigm is therefore needed. This Article demonstrates how aggregate liability may violate aggregate efficiency, and concludes that decisions based upon aggregate amounts are inappropriate when the victims are heterogeneous, as they typically are in real life. The Article then turns to an exploration of an alternative to the aggregative risk-utility test, and argues for a legal rule that would combine restitution for precaution costs, plus an added small "bonus," with the sampling of victims' claims.
Sheshinski, E. (2010).
Limits on Individual Choice.
Discussion Papers. Retrieved from
/files/db554.pdf
Abstract: Individuals behave with choice probabilities defined by a multinomial logit (MNL) probability distribution over a finite number of alternatives, which includes utilities as parameters. The salient feature of the model is that probabilities depend on the choice-set, or domain. Expanding the choice-set decreases the probabilities of alternatives included in the original set, providing positive probabilities to the added alternatives. The wider probability 'spread' causes some individuals to further deviate from their higher valued alternatives, while others find the added alternatives highly valuable. For a population with diverse preferences, there exists a subset of alternatives, called the optimum choice-set, which balances these considerations to maximize social welfare. The paper analyses the dependence of the optimum choice-set on a parameter which specifies the precision of individuals' choice ('degree of rationality'). It is proved that for high values of this parameter the optimum choice-set includes all alternatives, while for low values it is a singleton. Numerical examples demonstrate that for intermediate values, the size and possible nesting of the optimum choice-sets is complex. Governments have various means (defaults, tax/subsidy) to directly affect choice probabilities. This is modelled by 'probability weight' parameters. The paper analyses the structure of the optimum weights, focusing on the possible exclusion of alternatives. A binary example explores the level of 'type one' and 'type two' errors which justify the imposition of early eligibility for retirement benefits, common to social security systems. Finally, the effects of heterogeneous degrees of rationality among individuals are briefly discussed.
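The MNL choice probabilities described in the opening sentences can be written as P(i | S) = exp(beta * u_i) / sum over j in S of exp(beta * u_j), with beta the precision ('degree of rationality') parameter; the notation and the tiny numerical example below are illustrative assumptions rather than the paper's own. The sketch shows the property stated in the abstract: expanding the choice-set lowers the probabilities of the originally included alternatives, and a higher beta concentrates choice on the highest-utility alternative.

    import math

    def mnl_probs(utilities, beta):
        # P(i | choice set) = exp(beta * u_i) / sum_j exp(beta * u_j)
        weights = [math.exp(beta * u) for u in utilities]
        total = sum(weights)
        return [w / total for w in weights]

    small_set = [1.0, 0.5]               # utilities of two alternatives
    large_set = [1.0, 0.5, 0.4, 0.3]     # the same two plus two added alternatives

    print(mnl_probs(small_set, beta=2.0))    # about [0.73, 0.27]
    print(mnl_probs(large_set, beta=2.0))    # probabilities of the first two drop
    print(mnl_probs(large_set, beta=10.0))   # higher beta: choice concentrates on the best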
Bar-Hillel, M. (2010).
Maya Bar-Hillel.
Discussion Papers. Odyssey 8 (2010). Retrieved from
/files/db548.pdf
Abstract: Scientists try to find out the truth about our world. Judges in a court of law try to find out the truth about the target events in the indictment. What are the similarities, and what are the differences, in the procedures that govern the search for truth in these two systems? In particular, why are quantitative tools the hallmark of science, whereas in courts they are rarely used, and when used, are prone to error? (In Hebrew)
Shayo, M., & Harel, A. (2010).
Non-Consequentialist Voting.
Discussion Papers. Retrieved from
/files/db545.pdf
Abstract: Standard theory assumes that voters' preferences over actions (voting) are induced by their preferences over electoral outcomes (policies, candidates). But voters may also have non-consequentialist (NC) motivations: they may care about how they vote even if it does not affect the outcome. When the likelihood of being pivotal is small, NC motivations can dominate voting behavior. To examine the prevalence of NC motivations, we design an experiment that exogenously varies the probability of being pivotal yet holds constant other features of the decision environment. We find a significant effect, consistent with at least 12.5% of subjects being motivated by NC concerns.
Bartroff, J., Goldstein, L., Rinott, Y., & Samuel-Cahn, E. (2010).
On Optimal Allocation of a Continuous Resource Using an Iterative Approach and Total Positivity.
Discussion Papers. Advances in Applied Probability (2010), Vol. 42, 795-815. Retrieved from
/files/dp530.pdf
Abstract: We study a class of optimal allocation problems, including the well-known Bomber Problem, with the following common probabilistic structure. An aircraft equipped with an amount x of ammunition is intercepted by enemy airplanes arriving according to a homogeneous Poisson process over a fixed time duration t. Upon encountering an enemy, the aircraft has the choice of spending any amount 0
Halbersberg, Y. (2010).
On the Deduction of National Insurance Payments from Tort Victims' Claims.
Discussion Papers. 3 Mishpatim Online 1 (2010). Retrieved from
/files/dp564.pdf
Abstract: In CA 1093/07 Bachar v. Fokmann [2009] (request for additional hearing denied, 2010), the Israeli Supreme Court set out a formula for calculating the deduction of NII payments from a tort victim's claim when only some of the victim's impairment is causally linked to the tortious act in question. Overall, six Supreme Court Justices have reviewed and affirmed this simple formula. However, this formula is incorrect, as it contradicts some of the most basic tort premises, ignores the way impairment is calculated, and necessarily leads to the under-compensation of the victim and to an unjust enrichment of either the tortfeasor, the National Insurance Institute, or both. This Article, therefore, calls for the adoption of a different formula that is both legally and arithmetically correct.