Optimal Lobbying is the problem a lobbyist or a campaign manager faces in a full-information voting scenario of a multi-issue referendum when trying to influence the result. The lobby is faced with a profile that specifies, for each voter and each issue, whether the voter approves or rejects the issue, and seeks the smallest set of voters it must influence to change their votes so that a desired outcome is obtained. This computational problem also describes problems arising in other scenarios of aggregating complex opinions, such as principal-agent incentive schemes in complex combinatorial problems, and bribery and manipulation in Truth-Functional Judgement Aggregation. We study the computational complexity of Optimal Lobbying when the issues are aggregated using an anonymous monotone function and the family of desired outcomes is upward closed. We analyze this problem with regard to two parameters: the minimal number of supporters needed to pass an issue, and the size of the maximal minterm of the desired set. We show that for the extreme values of these parameters the problem is tractable, and provide algorithms. On the other hand, we prove intractability of the problem for the non-extremal values, which are the common values for these parameters.
We add two results to the theory of consistent voting. Let M be the set of all survivors of some feasible elimination procedure. We prove that i) M can be computed in polynomial time for each profile of preferences and ii) M is characterized by anonymity, non-imposition, Maskin monotonicity, and additive blocking.
For a random permutation π on {1, 2, …, n} for fixed n, and for M ⊆ {1, 2, …, n}, we analyse the distribution of the combined length L = L(π, M) of all cycles of π that contain at least one element of M. We give a simple, explicit formula for the probability of every possible value of L (backed by three proofs of distinct flavours), as well as closed-form formulae for its expectation and variance, showing that less than 1/(|M|+1) of the elements 1, …, n are expected to be contained in cycles of π that are disjoint from M, with low probability for a large deviation from this fraction. We furthermore give a simple explicit formula for all rising-factorial moments of L. These results are applicable to the study of manipulation in matching markets.
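The quantity L(π, M) is easy to explore empirically. The following Monte-Carlo sketch (illustrative only; it uses none of the paper's formulas, and the choices n = 12, |M| = 3 are arbitrary) estimates the expected fraction of elements lying in cycles disjoint from M, which the abstract bounds by 1/(|M|+1):

```python
import random

def combined_cycle_length(perm, M):
    """Total length of all cycles of perm (0-based, perm[i] = image of i)
    that contain at least one element of the set M."""
    n = len(perm)
    seen = [False] * n
    total = 0
    for start in range(n):
        if seen[start]:
            continue
        cycle = []                  # collect the cycle through `start`
        i = start
        while not seen[i]:
            seen[i] = True
            cycle.append(i)
            i = perm[i]
        if any(x in M for x in cycle):
            total += len(cycle)
    return total

def estimate_outside_fraction(n, M, trials=20000, seed=0):
    """Estimate E[(n - L)/n]: the expected fraction of elements lying in
    cycles of a uniformly random permutation that are disjoint from M."""
    rng = random.Random(seed)
    M = set(M)
    acc = 0.0
    for _ in range(trials):
        perm = list(range(n))
        rng.shuffle(perm)           # uniformly random permutation
        acc += (n - combined_cycle_length(perm, M)) / n
    return acc / trials

n, M = 12, {0, 1, 2}
frac = estimate_outside_fraction(n, M)
print(frac, 1 / (len(M) + 1))       # empirically frac < 1/(|M|+1) = 0.25
```

Running this yields an estimate comfortably below the 1/(|M|+1) bound from the abstract.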
We consider a fast evolutionary dynamic process on finite stopping games, where each player at each node has at most one move to continue the game. A state is evolutionarily stable if its long-run relative frequency of occurrence is bounded away from zero as the mutation rate decreases to zero. The fast dynamic process allows each individual in each population to change its strategy at every stage. We define a robustness index of backward induction and show examples where the backward induction equilibrium component is not evolutionarily stable for large populations. We give some sufficient conditions for evolutionary stability, which differ from those for the conventional evolutionary model. Even for this fast dynamic process, the transition between any two Nash equilibrium components may take a very long time.
We consider a basic dynamic evolutionary model with rare mutation and a best-reply (or better-reply) selection mechanism. A state is evolutionarily stable if its long-term relative frequency of occurrence is bounded away from zero as the mutation rate decreases to zero. We prove that, for all finite extensive-form games of perfect information, only Nash equilibria are evolutionarily stable. We show that, in games where a player may play at more than one node along some path, even when the populations increase to infinity, there may be some evolutionarily stable states which are not part of the backward induction equilibrium component. We give a sufficient condition for evolutionary stability and show how much extra value is needed in the terminal payoffs to make an equilibrium evolutionarily stable.
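Backward induction, the solution concept these abstracts revolve around, can be made concrete with a short sketch; the tree encoding and the toy stopping game below are illustrative assumptions, not the model of the paper.

```python
def backward_induction(node):
    """
    Solve a finite perfect-information game tree by backward induction.
    A node is either a tuple of terminal payoffs (p0, p1, ...) or a dict
    {'player': i, 'moves': {label: subtree}}.  Returns (payoffs, strategy),
    where strategy maps each decision node's id() to the chosen move label.
    """
    if isinstance(node, tuple):                 # terminal node: payoffs
        return node, {}
    player = node['player']
    best_label, best_payoffs, strategy = None, None, {}
    for label, child in node['moves'].items():
        payoffs, sub = backward_induction(child)
        strategy.update(sub)
        if best_payoffs is None or payoffs[player] > best_payoffs[player]:
            best_label, best_payoffs = label, payoffs
    strategy[id(node)] = best_label
    return best_payoffs, strategy

# A small centipede-style stopping game: players 0 and 1 alternate;
# 'stop' ends the game, 'continue' passes the move on.
game = {'player': 0, 'moves': {
    'stop': (1, 0),
    'continue': {'player': 1, 'moves': {
        'stop': (0, 2),
        'continue': {'player': 0, 'moves': {
            'stop': (3, 1),
            'continue': (2, 4),
        }},
    }},
}}
payoffs, _ = backward_induction(game)
print(payoffs)  # (1, 0): player 0 stops immediately at the root
```

As in the classic centipede analysis, backward induction makes the first mover stop at once, even though both players would gain by continuing for a while.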
The usual purpose of negotiations is to explore options and reach an agreement, if possible. We investigated a notable exception to this generalization, where a party negotiates without any intention of reaching an agreement. False negotiation occurs when a party gains more by stalling the negotiations until an external change takes place that improves its position considerably. While false negotiators aim to avoid agreement within the current frame of the negotiations, they also aim to keep the negotiation process alive, since walking away from the negotiation table could endanger their position. We report the results of a study that compared the actions of false and sincere negotiators. The false negotiators used competitive tactics that encumbered the negotiations, yet they concealed their intentions by maintaining a façade of cooperation. Our theoretical discussion is focused on the balancing act involved in false negotiations and the challenges it poses for actors in social, managerial, and political settings. We conclude our analysis with an example from the realm of international negotiations.
Consider a setting in which agents can each take one of two ordered actions and in which the incentive of any given agent to take the high action is positively reinforced by the number of other agents that take it. Furthermore, assume that we do not know any other details about the game being played. What can we say about the structure of the interaction between actions and incentives when we observe a set or a subset of all possible equilibria? In this paper we study three nested classes of games: (a) binary games of strategic complements; (b) games in (a) that admit a network representation; and (c) games in (b) in which the network is complete. Our main results are the following. It has long been established in the literature that the set of pure-strategy Nash equilibria of any binary game of strategic complements among a set N of agents can be seen as a lattice on the set of all subsets of N under the partial order defined by the set inclusion relation. If the game happens to be strict, in the sense that agents are never indifferent among outcomes (games in (a)), then the resulting lattice of equilibria satisfies a straightforward sparseness condition. (1) We show that, in fact, the games in (a) express all such lattices. (2) We characterize the collection of subsets of N that can be weakly expressed as the set of equilibria of some game of thresholds (games in (b)). (3) We characterize the collection of subsets of N that can be weakly expressed as the set of equilibria of some game of thresholds on the complete graph (games in (c)).
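A game of thresholds (class (b)) can be made concrete with a brute-force sketch that enumerates the equilibrium sets; the 4-cycle graph and the thresholds below are illustrative choices, not taken from the paper.

```python
from itertools import combinations

def equilibria_of_threshold_game(neighbors, thresholds):
    """
    Pure Nash equilibria of a game of thresholds on a graph: agent i's
    best reply is the high action iff at least thresholds[i] of its
    neighbors play the high action (ties resolved toward the high action).
    An equilibrium is identified with the set S of agents playing high.
    """
    agents = list(neighbors)
    eqs = []
    for r in range(len(agents) + 1):
        for S in combinations(agents, r):
            S = set(S)
            # S is an equilibrium iff membership matches the best reply
            if all((len(S & neighbors[i]) >= thresholds[i]) == (i in S)
                   for i in agents):
                eqs.append(frozenset(S))
    return eqs

# A 4-cycle in which every agent needs both neighbors on board:
neighbors = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
thresholds = {0: 2, 1: 2, 2: 2, 3: 2}
print(equilibria_of_threshold_game(neighbors, thresholds))
# only the empty set and the full set are equilibria here
```

The smallest and largest equilibria found this way are the bottom and top of the lattice of equilibria described above.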
Armando Castañeda, Yannai A. Gonczarowski, Yoram Moses. 2013. “Good, Better, Best! Unbeatable Protocols for Consensus and Set Consensus”.
While the very first consensus protocols for the synchronous model were designed to match the worst-case lower bound, deciding in exactly t+1 rounds in all runs, it was soon realized that they could be strictly improved upon by early-stopping protocols. These dominate the first ones by always deciding in at most t+1 rounds, but often much faster. A protocol P is unbeatable if it cannot be strictly dominated; namely, if no protocol Q can decide strictly earlier than P against at least one adversary strategy while deciding at least as fast as P in all cases. Unbeatability is often a much more suitable notion of optimality for distributed protocols than worst-case performance. Halpern, Moses and Waarts (2001), who introduced this notion, presented a general logic-based transformation of any consensus protocol into an unbeatable protocol that dominates it, and suggested a particular unbeatable consensus protocol. Their analysis is based on a notion of continual common knowledge, which is not easy to work with in practice. Using a more direct knowledge-based analysis, this paper studies unbeatability for both consensus and k-set consensus. We present unbeatable solutions to non-uniform consensus and k-set consensus, as well as to uniform consensus, in synchronous message-passing contexts with crash failures. Our consensus protocol strictly dominates the one suggested by Halpern, Moses and Waarts, showing that their protocol is in fact beatable. The k-set consensus problem is much more technically challenging than consensus, and its analysis has triggered the development of the topological approach to distributed computing. Worst-case lower bounds for this problem have required either techniques based on algebraic topology (Guerraoui et al., 2009) or reduction-based proofs (Alistarh et al., 2012; Gafni et al., 2011). Our proof of unbeatability is purely combinatorial, and is a direct, albeit nontrivial, generalization of the one for consensus.
We also present an alternative topological unbeatability proof that allows one to understand the connection between the connectivity of protocol complexes and the decision time of processes. All of our protocols make use of a notion of a hidden path of nodes relative to a process i at time m, in which a value unknown to i at m may be seen by others. This is a structure that can implicitly be found in lower bound proofs for consensus going back to the '80s (Dolev and Strong, 1982). Its use in our protocols sheds light on the mathematical structure underlying the consensus problem and its variants. For the synchronous model, only solutions to the uniform variant of k-set consensus have been offered. Based on our unbeatable protocols for uniform consensus and for non-uniform k-set consensus, we present a uniform k-set consensus protocol that strictly dominates all known solutions to this problem in the synchronous model.
This paper continues the work initiated in [19]. We adopt the same model as in [19]. We show that the non-backward-induction equilibrium component may be evolutionarily stable for any population size in a finite stopping game where the two equilibrium components are terminated by different players. A surprising result is that the backward induction equilibrium component may not be evolutionarily stable for large populations. Finally, we study evolutionary stability under a different limiting process, in which the expected number of mutations per generation is bounded away from both zero and infinity.
Itai Arieli, Robert J. Aumann. 2013. “The Logic of Backward Induction”.
The logic of backward induction (BI) in perfect information (PI) games has been intensely scrutinized for the past quarter century. A major development came in 2002, when P. Battigalli and M. Siniscalchi (BS) showed that an outcome of a PI game is consistent with common strong belief of utility maximization if and only if it is the BI outcome. Both BS's formulation and their proof are complex and deep. We show that the result continues to hold when utility maximization is replaced by a rationality condition that is even more compelling; more importantly, the formulation and proof become far more transparent, accessible, and self-contained.
Gale and Sotomayor (1985) have shown that in the Gale-Shapley matching algorithm (1962), the proposed-to side W (referred to as women there) can strategically force the W-optimal stable matching as the M-optimal one by truncating their preference lists, with each woman possibly blacklisting all but one man. As Gusfield and Irving already noted in 1989, no results are known regarding achieving this feat by means other than such preference-list truncation, i.e. by also permuting preference lists. We answer Gusfield and Irving's open question by providing tight upper bounds on the number of blacklists, and on their combined size, required by the women to force a given matching as the M-optimal stable matching, or, more generally, as the unique stable matching. Our results show that the coalition of all women can strategically force any matching as the unique stable matching, using preference lists in which at most half of the women have nonempty blacklists, and in which the average blacklist size is less than 1. This allows the women to manipulate the market in a manner that is, in a sense, far more inconspicuous than previously realized. When there are fewer women than men, we show that in the absence of blacklists for men, the women can force any matching as the unique stable matching without blacklisting anyone, while when there are more women than men, each to-be-unmatched woman may have to blacklist as many as all men. Together, these results shed light on the question of how much, if at all, given preferences for one side a priori impose limitations on the set of stable matchings under various conditions. All of the results in this paper are constructive, providing efficient algorithms for calculating the desired strategies.
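For context, the men-proposing deferred acceptance procedure of Gale and Shapley (1962), extended to allow truncated women's lists (the omitted men form a "blacklist"), can be sketched as follows; this is the standard textbook algorithm, not one of the paper's constructions, and the toy preferences are illustrative.

```python
def gale_shapley(men_prefs, women_prefs):
    """
    Men-proposing deferred acceptance.  men_prefs[m] and women_prefs[w]
    are preference lists, most preferred first; a woman's list may be
    truncated, in which case the omitted men are rejected outright.
    Returns the resulting matching as a dict {man: woman}.
    """
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_proposal = {m: 0 for m in men_prefs}
    engaged_to = {}                      # woman -> man
    free = list(men_prefs)
    while free:
        m = free.pop()
        if next_proposal[m] >= len(men_prefs[m]):
            continue                     # m exhausted his list; unmatched
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if m not in rank[w]:             # w blacklists m
            free.append(m)
        elif w not in engaged_to:        # w is free: tentatively accept
            engaged_to[w] = m
        elif rank[w][m] < rank[w][engaged_to[w]]:
            free.append(engaged_to[w])   # w trades up to m
            engaged_to[w] = m
        else:
            free.append(m)               # w rejects m
    return {m: w for w, m in engaged_to.items()}

# With full lists, deferred acceptance yields the M-optimal stable matching:
men = {'a': ['x', 'y'], 'b': ['x', 'y']}
women = {'x': ['b', 'a'], 'y': ['a', 'b']}
print(gale_shapley(men, women))  # {'b': 'x', 'a': 'y'}
```

In this toy market {a-y, b-x} happens to be the unique stable matching, so no truncation by the women could change the outcome; the paper's results concern how little truncation is needed in general.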
Sergiu Hart, Andreu Mas-Colell. 2013. “Markets, Correlation, and Regret-Matching”.
Inspired by the existing work on correlated equilibria and regret-based dynamics in games, we carry out a first exploration of the links between the leading equilibrium concept for (exchange) economies, Walrasian equilibrium, and the dynamics, specifically regret-matching dynamics, of trading games that fit the economic structure and have the property that their pure Nash equilibria implement the Walrasian outcomes. Interestingly, in the case of quasilinear utilities (or "transferable utility"), all the concepts essentially coincide, and we get simple deterministic dynamics converging to Walrasian outcomes. Connections to sunspot equilibria are also studied.
We consider the menu size of auctions as a measure of auction complexity and study how it affects revenue. Our setting has a single revenue-maximizing seller selling two or more heterogeneous items to a single buyer whose private values for the items are drawn from a (possibly correlated) known distribution, and whose valuation is additive over the items. We show that the revenue may increase arbitrarily with menu size and that a bounded menu size cannot ensure any positive fraction of the optimal revenue. The menu size turns out to "nail down" the revenue properties of deterministic auctions: their menu size may be at most exponential in the number of items, and indeed their revenue may be larger than that achievable by the simplest types of auctions by a factor that is exponential in the number of items, but no larger. Our model is related to a previously studied "unit-demand" model, and our results also answer an open problem in that model.
An agent needs to decide which of two available actions, A or B, to take. The agent's payoffs are such that A dominates B, i.e., taking A yields a better payoff than taking B in every contingency. On the other hand, the agent's expected payoffs, given the action taken, are in the reverse order, i.e., E(payoff | B) > E(payoff | A), which can happen if the probabilities of the various contingencies are not independent of the action being taken. What should the agent do? This dilemma has come to be known as Newcomb's Paradox (Nozick, 1969). The present essay shows that the rule "keep away, as much as possible, from any dominated action" is perfectly consistent with actually taking the dominated action, when appropriate. No paradox.
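The coexistence of dominance and reversed conditional expectations can be checked with a few lines of arithmetic; the payoff numbers and the 99%-accurate predictor below are illustrative assumptions, not taken from the essay.

```python
# A toy Newcomb-type payoff table.  Contingency in {c1, c2}; action A
# dominates B in every contingency, yet the contingency is correlated
# with the action, reversing the conditional expectations.
payoff = {('A', 'c1'): 1001, ('A', 'c2'): 1,
          ('B', 'c1'): 1000, ('B', 'c2'): 0}

# P(contingency | action): the contingency tracks the action 99% of the time
prob = {('A', 'c1'): 0.01, ('A', 'c2'): 0.99,
        ('B', 'c1'): 0.99, ('B', 'c2'): 0.01}

def expected(action):
    """Conditional expectation E(payoff | action)."""
    return sum(prob[(action, c)] * payoff[(action, c)] for c in ('c1', 'c2'))

# A dominates B contingency-by-contingency...
assert payoff[('A', 'c1')] > payoff[('B', 'c1')]
assert payoff[('A', 'c2')] > payoff[('B', 'c2')]
# ...yet E(payoff | B) > E(payoff | A):
print(expected('A'), expected('B'))  # roughly 11.0 and 990.0
```

Dominance compares payoffs contingency by contingency, while the conditional expectations weight the contingencies differently for each action; with correlated probabilities the two orderings can disagree, which is exactly the dilemma described above.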
Rune Midjord, Tomás Rodríguez-Barraquer, Justin Valasek. 2013. “Over-Caution of Large Committees of Experts”.
In this paper, we demonstrate that payoffs linked to a committee member's individual vote may explain over-cautious behavior in committees. A committee of experts must decide whether to approve or reject a proposed innovation on behalf of society. In addition to a payoff linked to the adequateness of the committee's decision, each expert receives a disesteem payoff if he or she voted in favor of an ill-fated innovation. An example is FDA committees, whose members can be exposed to a disesteem (negative) payoff if they vote to pass a drug that proves to be fatal for some users. We show that no matter how small the disesteem payoffs are, information aggregation fails in large committees: under any majority rule, the committee rejects the innovation almost surely. We then show that this inefficiency can be mitigated by pre-vote information pooling, but only if the decision is taken under unanimity: in the presence of disesteem payoffs, committee members will only vote efficiently if they are all responsible for the final decision.
Tamar Keasar, Miriam Kishinevsky, Avi Shmida, Yoram Gerchman, Nicka Chinkov, Avi Koplovich, Gadi Katzir. 2013. “Plant-Derived Visual Signals May Protect Beetle Herbivores from Bird Predators”.
Insect herbivores often use chemical signals obtained from their food plants to deter enemies and/or attract sexual partners. Do plant-based visual signals act similarly, i.e., repel consumers' enemies and appeal to potential mates? We explored this question using the pollen-feeding beetle Pygopleurus israelitus (Glaphyridae), a specialized pollinator of Anemone coronaria's chemically defended red-morph flowers. We presented dead beetles, which had fed either on anemones or on cat food, to young domestic chicks on a red (anemone-like) or a green (leaf-like) background. We determined whether the beetles' background color and diet affected the chicks' feeding. Cuticle surface extracts from anemone-fed beetles, but not from cat-food-fed beetles, contained a secondary metabolite characteristic of anemones. Latencies to the first picking-up and consuming of beetles from green backgrounds were shorter than those of beetles from red backgrounds. The picking-up order of beetles also indicated that prey from the green background was preferred. The chicks retained this preference when re-tested three days later. Handling times of anemone-fed beetles were longer than those of cat-food-fed beetles. A previous study showed that glaphyrids improve their mate-finding prospects by orienting to large red anemone flowers. Here, female beetles preferred cat-food-fed to anemone-fed males in mate-choice assays, so anemone-derived chemicals did not increase mating success. Instead, the combined results indicate that A. coronaria's red flowers provide a visual signal that may both deter its herbivore's predators and attract its mates. To our knowledge, this is the first experimental evidence for a potential protective role of plant-derived visual signals for insect herbivores/pollinators. Keywords: predation; secondary metabolite; tritrophic interactions; warning coloration; domestic chick; Glaphyridae; pollination.
We consider the complexity of finding a correlated equilibrium in an n-player game, in a model that allows the algorithm to make queries for players' utilities at pure strategy profiles. Many randomized regret-matching dynamics are known to yield an approximate correlated equilibrium quickly: in time that is polynomial in the number of players n, the number of strategies of each player m, and 1/ε, where ε is the approximation error. Here we show that both randomization and approximation are necessary: no efficient deterministic algorithm can reach even an approximate equilibrium, and no efficient randomized algorithm can reach an exact equilibrium.
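The flavor of the randomized dynamics referred to above can be sketched with a minimal implementation of unconditional regret matching in a two-player game; the game (matching pennies), the step count, and all names below are illustrative assumptions. Vanishing average regret drives the empirical joint play toward the set of coarse correlated equilibria; the conditional-regret variant of the same idea reaches correlated equilibria.

```python
import random

def regret_matching(payoffs, steps=5000, seed=1):
    """
    Unconditional regret matching in a two-player, two-action game.
    payoffs[(a0, a1)] = (u0, u1).  Each player plays an action with
    probability proportional to its positive cumulative regret (uniformly
    at random when no regret is positive).  Returns player 0's per-step
    average positive regret, which should shrink as steps grow.
    """
    rng = random.Random(seed)
    n_actions = 2
    regret = [[0.0] * n_actions, [0.0] * n_actions]

    def pick(r):
        pos = [max(x, 0.0) for x in r]
        if sum(pos) == 0:
            return rng.randrange(n_actions)
        return rng.choices(range(n_actions), weights=pos)[0]

    for _ in range(steps):
        a = (pick(regret[0]), pick(regret[1]))
        u = payoffs[a]
        # update each player's regret toward every alternative action,
        # holding the opponent's realized action fixed
        for p in (0, 1):
            for alt in range(n_actions):
                b = list(a)
                b[p] = alt
                regret[p][alt] += payoffs[tuple(b)][p] - u[p]
    return max(regret[0]) / steps

# matching pennies: zero-sum, unique equilibrium is fully mixed
mp = {(0, 0): (1, -1), (0, 1): (-1, 1), (1, 0): (-1, 1), (1, 1): (1, -1)}
print(regret_matching(mp))  # average regret close to 0
```

Note that each step makes one utility query per counterfactual action profile, matching the query model described in the abstract.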
We model constitutions by effectivity functions. We assume that the constitution is common knowledge among the members of the society. However, the preferences of the citizens are private information. We investigate whether there exist decision schemes (i.e., functions that map profiles of (dichotomous) preferences on the set of outcomes to lotteries on the set of social states) with the following properties: i) the distribution of power induced by the decision scheme is identical to the effectivity function under consideration; and ii) the (incomplete-information) game associated with the decision scheme has a Bayesian Nash equilibrium in pure strategies. If the effectivity function is monotonic and superadditive, then we find a class of decision schemes with the foregoing properties. When applied to n-person games in strategic form, a decision scheme d is a mapping from profiles of (dichotomous) preferences on the set of pure strategy vectors to probability distributions over outcomes (or, equivalently, over pure strategy vectors). We prove that for any feasible and individually rational payoff vector of a strategic game, there exists a decision scheme that yields that payoff vector as a (pure) Nash equilibrium payoff in the game induced by the strategic game and the decision scheme. This can be viewed as a kind of purification result.
We introduce asymptotic analysis of stochastic games with short-stage duration. The play of stage $k$, $k\geq 0$, of a stochastic game $\Gamma_\delta$ with stage duration $\delta$ is interpreted as the play in time $k\delta\leq t<(k+1)\delta$. We define convergence, strong convergence, and exact convergence of the family $(\Gamma_\delta)_{\delta>0}$ as the stage duration $\delta$ goes to $0$, and study the asymptotic behavior of the value, optimal strategies, and equilibrium. The asymptotic analogs of the discounted, limiting-average, and uniform equilibrium payoffs are defined. Convergence implies the existence of an asymptotic discounted equilibrium payoff, strong convergence implies the existence of an asymptotic limiting-average equilibrium payoff, and exact convergence implies the existence of an asymptotic uniform equilibrium payoff.
Are good people motivated to behave in accordance with the moral truth, whatever it is? Michael Smith, who has named this motivation the de-dicto moral motivation, famously criticized it. According to Smith, good people are instead motivated directly by more concrete moral concerns, such as "the well-being of their fellows, people getting what they deserve, justice, equality, and the like". Here I argue for the non-Smithian view that good people also have a de-dicto moral motivation. The argument runs roughly as follows: given that good people tend to behave appropriately, and that in some situations it is appropriate to reevaluate one's underived moral beliefs, good people tend to seriously reevaluate underived moral beliefs sometimes. Theories of motivation have to account for this fact (a point overlooked by Smith and his respondents). What motivates a good person to pay attention to evidence that is contrary to her underived moral beliefs? What does she aim for in reevaluating those beliefs? I argue that the view that good people are motivated to act morally de-dicto is in a better position to explain the relevant facts about good people's reevaluation of underived moral beliefs.