2005
Dobzinski, S., Nisan, N., & Schapira, M. (2005).
Truthful Randomized Mechanisms for Combinatorial Auctions.
Discussion Papers. Retrieved from
/files/dp408.pdf Publisher's Version
Abstract: We design two computationally efficient incentive-compatible mechanisms for combinatorial auctions with general bidder preferences. Both mechanisms are randomized and are incentive-compatible in the universal sense. This is in contrast to recent previous work that only addresses the weaker notion of incentive compatibility in expectation. The first mechanism obtains an O(√m)-approximation of the optimal social welfare for arbitrary bidder valuations; this is the best approximation possible in polynomial time. The second one obtains an O(log² m)-approximation for a subclass of bidder valuations that includes all submodular bidders. This improves over the best previously obtained incentive-compatible mechanism for this class, which only provides an O(√m)-approximation.
Gura, E.-Y. (2005).
Using Game Theory to Increase Students' Motivation to Learn Mathematics.
Discussion Papers. Published in Proceedings of the 4th Mediterranean Conference on Mathematics Education 2 (2005), 515-520. Retrieved from
/files/dp384.pdf Publisher's Version
Abstract: This paper reports an attempt to teach game theory in order to increase students' motivation to learn mathematics. A course in game theory was created in order to introduce students to new mathematical content presented in a different way.
Aumann, R. J., & Drèze, J. H. (2005).
When All Is Said and Done, How Should You Play and What Should You Expect?
Discussion Papers. Published as "Rational Expectations in Games," American Economic Review 98 (2008), 72-86. Retrieved from
/files/ 86.pdf Publisher's Version
Abstract: Modern game theory was born in 1928, when John von Neumann published his Minimax Theorem. This theorem ascribes to all two-person zero-sum games a value (what rational players may expect) and optimal strategies (how they should play to achieve that expectation). Seventy-seven years later, strategic game theory has not gotten beyond that initial point, insofar as the basic questions of value and optimal strategies are concerned. Equilibrium theories do not tell players how to play and what to expect; even when there is a unique Nash equilibrium, it is not at all clear that the players "should" play this equilibrium, nor that they should expect its payoff. Here, we return to square one: abandon all ideas of equilibrium and simply ask, how should rational players play, and what should they expect? We provide answers to both questions, for all n-person games in strategic form.
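The value and optimal strategies guaranteed by the Minimax Theorem are easy to exhibit concretely. A minimal sketch, not from the paper: the closed-form solution for 2x2 zero-sum games, applied to the stock "matching pennies" example.

```python
# Not from the paper: closed-form value and optimal mixed strategy for a
# 2x2 zero-sum game, applied to the stock "matching pennies" example.

def solve_2x2(a, b, c, d):
    """Row player's payoff matrix [[a, b], [c, d]]; row player maximizes."""
    # Saddle-point check: pure optimal strategies exist if maximin == minimax.
    if max(min(a, b), min(c, d)) == min(max(a, c), max(b, d)):
        return max(min(a, b), min(c, d)), None
    # Mixed case: choose p = P(row 1) so both columns yield the same payoff.
    denom = a - b - c + d
    p = (d - c) / denom
    value = (a * d - b * c) / denom
    return value, p

value, p = solve_2x2(1, -1, -1, 1)                # matching pennies
print(f"value = {value}, optimal P(row 1) = {p}")  # value 0.0, P(row 1) 0.5
```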
Samuel-Cahn, E. (2005).
When Should You Stop and What Do You Get? Some Secretary Problems.
Discussion Papers. Published as "Optimal Stopping for I.I.D. Random Variables," Sequential Analysis 26 (2007), 395-401. Retrieved from
/files/dp407.pdf Publisher's Version
Abstract: A version of a secretary problem is considered: Let Xj
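The abstract above is truncated in the source. For orientation only, here is a simulation of the classical secretary rule (reject the first n/e candidates, then accept the first record); the paper's variant with i.i.d. random variables differs from this classical version.

```python
# For orientation only: Monte Carlo check of the classical secretary rule.
# The paper's i.i.d.-random-variables variant differs from this version.
import math
import random

def picks_best(n: int, rng: random.Random) -> bool:
    ranks = rng.sample(range(n), n)          # arrival order; n-1 is the best
    cutoff = int(n / math.e)
    best_seen = max(ranks[:cutoff], default=-1)
    for r in ranks[cutoff:]:
        if r > best_seen:                    # first record after the cutoff
            return r == n - 1
    return False                             # the best was in the rejected prefix

rng = random.Random(0)
trials = 20_000
hits = sum(picks_best(50, rng) for _ in range(trials))
print(f"P(select best) ~ {hits/trials:.3f}  (theory: 1/e ~ {1/math.e:.3f})")
```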
J. A., & Flekser, O. (2005).
With the Eye Being a Ball, What Happens to Fixational Eye Movements in the Periphery?
Discussion Papers. Retrieved from
/files/dp390.pdf Publisher's Version
Abstract: Although the fact that the eye is constantly moving has been known for a long time, the role of fixational eye movements (FEM) is still in dispute. Whatever their role, it is structurally clear that, since the eye is a ball, the size of these movements diminishes for locations closer to the poles. Here we propose a new perspective on the role of FEM, from which we derive a prediction for a three-way interaction of a stimulus' orientation, location, and spatial frequency. Measuring time-to-disappearance for gratings located in the periphery, we find that, as predicted, gratings located to the left and right of fixation fade faster when horizontal than when vertical at low spatial frequencies, and faster when vertical than when horizontal at high spatial frequencies. The opposite is true for gratings located above and below fixation.
Nisan, N., & Schocken, S. (2005).
The Elements of Computing Systems. The MIT Press.
In the early days of computer science, the interactions of hardware, software, compilers, and operating system were simple enough to allow students to see an overall picture of how computers worked. With the increasing complexity of computer technology and the resulting specialization of knowledge, such clarity is often lost. Unlike other texts that cover only one aspect of the field, The Elements of Computing Systems gives students an integrated and rigorous picture of applied computer science, as it comes into play in the construction of a simple yet powerful computer system.

Indeed, the best way to understand how computers work is to build one from scratch, and this textbook leads students through twelve chapters and projects that gradually build a basic hardware platform and a modern software hierarchy from the ground up. In the process, the students gain hands-on knowledge of hardware architecture, operating systems, programming languages, compilers, data structures, algorithms, and software engineering. Using this constructive approach, the book exposes a significant body of computer science knowledge and demonstrates how theoretical and applied techniques taught in other courses fit into the overall picture.

Designed to support one- or two-semester courses, the book is based on an abstraction-implementation paradigm; each chapter presents a key hardware or software abstraction, a proposed implementation that makes it concrete, and an actual project. The emerging computer system can be built by following the chapters, although this is only one option, since the projects are self-contained and can be done or skipped in any order. All the computer science knowledge necessary for completing the projects is embedded in the book, the only prerequisite being programming experience.

The book's web site provides all tools and materials necessary to build all the hardware and software systems described in the text, including two hundred test programs for the twelve projects. The projects and systems can be modified to meet various teaching needs, and all the supplied software is open-source.
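A toy illustration of the book's constructive, Nand-upward approach; the book itself works in a simple hardware description language rather than Python, so this sketch only conveys the spirit of the first project.

```python
# A Python toy in the book's constructive, Nand-upward spirit (the book
# itself uses a simple hardware description language, not Python).

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a):    return nand(a, a)
def and_(a, b): return nand(nand(a, b), nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b): return and_(or_(a, b), nand(a, b))

# Truth tables for the derived gates, all built from the single primitive.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", "and:", and_(a, b), "or:", or_(a, b), "xor:", xor_(a, b))
```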
Mapa. (2005).
Plants of Israel (צמחי ישראל). Mapa.
Plants of Israel is a practical guide that enables simple, visual identification of the common wild plants of Israel. It is a valuable and essential aid for anyone who is interested in the plants of Israel and wishes to deepen their knowledge of the field. The Mapa Lexicon: Plants of Israel describes the most important and widespread of the country's plants, grouped by the color of their flowers. Each entry in the lexicon provides a photograph and a full description of the plant in question, a distribution map and a "flowering calendar" for the plant, and basic information on related species. In all, the book describes more than 800 plant species, together with about 1,000 related species. In addition, the book offers fascinating information on many other topics, from plants' strategies of reproduction and defense, through advertisement and deception among plants, to the culinary and medicinal uses of many of the plants described in the book.
2004
Häggström, O., Kalai, G., & Mossel, E. (2004).
A Law of Large Numbers for Weighted Majority.
Discussion Papers. Retrieved from
/files/dp363.pdf Publisher's Version
Abstract: Consider an election between two candidates in which the voters' choices are random and independent and the probability of a voter choosing the first candidate is p > 1/2. Condorcet's Jury Theorem, which he derived from the weak law of large numbers, asserts that if the number of voters tends to infinity, then the probability that the first candidate will be elected tends to one. The notion of the influence of a voter, or its voting power, is relevant for extensions of the weak law of large numbers to voting rules more general than simple majority. In this paper we point out two different ways to extend the classical notions of voting power and influence to arbitrary probability distributions. The extension relevant to us is the "effect" of a voter, which is a weighted version of the correlation between the voter's vote and the election's outcome. We prove an extension of the weak law of large numbers to weighted majority games when all individual effects are small, and show that this result does not apply to any voting rule which is not based on weighted majority.
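The classical jury theorem that the abstract starts from can be checked numerically: the probability that a simple majority of n independent voters elects the better candidate is P(Binomial(n, p) > n/2), which climbs toward one as n grows. A minimal exact computation (simple majority only; the paper's weighted-majority extension is not implemented here):

```python
# Exact check of the classical (simple-majority) jury theorem behind the
# abstract: P(first candidate elected) = P(Binomial(n, p) > n/2) -> 1.
# The paper's weighted-majority extension is not implemented here.
from math import comb

def majority_prob(n: int, p: float) -> float:
    """Probability that more than half of n independent voters choose
    the first candidate, each doing so with probability p."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (11, 101, 1001):                # odd n avoids ties
    print(n, round(majority_prob(n, 0.52), 4))
```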
Toxvaerd, F. (2004).
A Theory of Optimal Deadlines.
Discussion Papers. Retrieved from
/files/dp357.pdf Publisher's Version
Abstract: This paper sets forth a model of contracting for delivery in an environment with time to build and adverse selection. The optimal contract is derived and characterized, and it takes the form of a deadline contract. Such a contract stipulates a delivery deadline for each possible level of agent efficiency. The optimal contract induces inefficient delay by using delivery time as a screening device. Furthermore, rents are decreasing in the agent's efficiency. In meeting the deadline, the agent's effort is strictly increasing over time, due to discounting. It is shown that increasing the project's gross value decreases delivery time, while increasing the scale or difficulty of the project increases it. Last, it is shown that the agent's rents are increasing in both project difficulty and gross project value.
Hart, S. (2004).
Adaptive Heuristics.
Discussion Papers. Published in Econometrica 73 (2005), 1401-1430.
Abstract: We exhibit a large class of simple rules of behavior, which we call adaptive heuristics, and show that they generate rational behavior in the long run. These adaptive heuristics are based on natural regret measures, and may be viewed as a bridge between rational and behavioral viewpoints. The results presented here, taken together, establish a solid connection between the dynamic approach of adaptive heuristics and the static approach of correlated equilibria.
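The best-known adaptive heuristic of this kind is regret matching (Hart and Mas-Colell): play each action with probability proportional to its positive regret. A minimal sketch of the unconditional-regret variant, whose average regret vanishes over time; the 2x2 game and the i.i.d. opponent below are made up for illustration, and it is the conditional-regret version that leads to correlated equilibria.

```python
# A sketch in the spirit of the paper's regret-based heuristics: play each
# action with probability proportional to its positive (unconditional)
# regret. The 2x2 game and the i.i.d. opponent are made up for
# illustration; the conditional-regret version is the one that leads to
# correlated equilibria.
import random

U = [[3, 0], [1, 2]]             # row player's payoff matrix (illustrative)
rng = random.Random(1)
regret = [0.0, 0.0]              # cumulative regret for each of my actions
T = 50_000

for t in range(T):
    pos = [max(r, 0.0) for r in regret]
    if sum(pos) > 0:
        me = rng.choices([0, 1], weights=pos)[0]
    else:
        me = rng.randrange(2)    # no positive regret yet: play uniformly
    opp = rng.randrange(2)       # i.i.d. uniform opponent, for simplicity
    for a in (0, 1):             # "what would action a have earned instead?"
        regret[a] += U[a][opp] - U[me][opp]

print("average regrets:", [round(r / T, 4) for r in regret])  # tend to <= 0
```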
Guttman, I., Kadan, O., & Kandel, E. (2004).
Adding the Noise: A Theory of Compensation-Driven Earnings Management.
Discussion Papers. Retrieved from
/files/dp355.pdf Publisher's Version
Abstract: Empirical evidence suggests that the distribution of earnings reports is discontinuous. This is puzzling, since the distribution of true earnings is likely to be continuous. We present a model that rationalizes this phenomenon. In our model, managers report their earnings to rational investors, who price the stock accordingly. We assume that misreporting is costly, but since managers' compensation is based on the stock price, they may want to manipulate the reported earnings. The model fits into the general framework of signaling games with a continuum of types. The conventional equilibrium in this game is fully revealing (e.g., Stein 1989), and does not explain the observed discontinuity of earnings reports. We show that a partially pooling equilibrium exists in such games as well, and it generates an endogenous discontinuity in reports. By pooling reports of different types, the informed manager introduces "home-made" noise into his report. The resulting vagueness enables the manager to reduce the manipulation costs. While a priori pooling looks manipulative, it is actually a way to reduce earnings management. The empirical implications of our model relate earnings management and price reaction to price- and earnings-based compensation, growth opportunities of the firm, underlying volatility, and the stringency of accounting rules. We show that this equilibrium arises due to stock-based compensation of the managers, and does not arise when they are paid based on their earnings directly. Finally, we present a general version of this model describing the behavior of biased experts in many real-life situations.
Aumann, R. J., Furstenberg, H., Lapides, I., & Witztum, D. (2004).
Analyses of the Gans Committee Report.
Discussion Papers. Retrieved from
/files/dp365.pdf Publisher's Version
Abstract: This document contains four separate analyses, each with a different author, of the "Gans" committee report on the Bible codes (DP 364 of the Center for the Study of Rationality, June 2004). The analyses appear in alphabetical order of the authors' names. Three of the authors were members of the committee; one, Doron Witztum, is active in Bible codes research. Two of the analyses (by Aumann and by Furstenberg) support the report of the committee; the other two (by Lapides and by Witztum) do not. This document contains material that was generated after the results of the committee's experiments became known; other than reporting the numerical results themselves, DP 364 contains only material generated before they became known.
Aumann, R. J., & Drèze, J. H. (2004).
Assessing Strategic Risk.
Discussion Papers. Retrieved from
/files/dp361.pdf Publisher's Version
Abstract: In recent decades, the concept of subjective probability has been increasingly applied to an adversary's choices in strategic games. A careful examination reveals that the standard construction of subjective probabilities does not apply in this context. We show how the difficulty may be overcome by means of a different construction.
Falk, R., Lann, A., & Zamir, S. (2004).
Average Speed Bumps: Four Perspectives on Averaging Speeds.
Discussion Papers. Retrieved from
/files/dp367.pdf Publisher's Version
Peleg, B., & Sudhölter, P. (2004).
Bargaining Sets of Voting Games.
Discussion Papers. (Revised in DP #410.) Retrieved from
/files/dp376.pdf Publisher's Version
Abstract: Let A be a finite set of m ≥ 3 alternatives, let N be a finite set of n ≥ 3 players, and let R^N be a profile of linear preference orderings on A of the players. Throughout most of the paper the considered voting system is the majority rule. Let u^N be a profile of utility functions for R^N. Using α-effectiveness we define the NTU game V_{u^N} and investigate its Aumann-Davis-Maschler and Mas-Colell bargaining sets. The first bargaining set is nonempty for m = 3, and it may be empty for m ≥ 4. Moreover, in a simple probabilistic model, for fixed m, the probability that the Aumann-Davis-Maschler bargaining set is nonempty tends to one as n tends to infinity. The Mas-Colell bargaining set is nonempty for m ≤ 5, and it may be empty for m ≥ 6. Moreover, we prove the following startling result: the Mas-Colell bargaining set of any simple majority voting game derived from the k-th replication of R^N is nonempty, provided that k ≥ n + 2. We also compute the NTU games which are derived from choice by plurality voting and approval voting, and we analyze some interesting examples.
Keiding, H., & Peleg, B. (2004).
Binary Effectivity Rules.
Discussion Papers. Published in Review of Economic Design 10 (2006), 167-181. Retrieved from
/files/dp378.pdf Publisher's Version
Abstract: A social choice rule is a collection of social choice correspondences, one for each agenda. An effectivity rule is a collection of effectivity functions, one for each agenda. We prove that every monotonic and superadditive effectivity rule is the effectivity rule of some social choice rule. A social choice rule is binary if it is rationalized by an acyclic binary relation. The foregoing result motivates our definition of a binary effectivity rule as the effectivity rule of some binary social choice rule. A binary social choice rule is regular if it satisfies unanimity, monotonicity, and independence of infeasible alternatives. A binary effectivity rule is regular if it is the effectivity rule of some regular binary social choice rule. We characterize completely the family of regular binary effectivity rules. Quite surprisingly, intrinsically defined von Neumann-Morgenstern solutions play an important role in this characterization.
Goldstein, M., Irvine, P., Kandel, E., & Wiener, Z. (2004).
Brokerage Commissions and Institutional Trading Patterns.
Discussion Papers. Retrieved from
/files/dp356.pdf Publisher's Version
Abstract: Why do brokers charge per-share commissions to institutional traders? What determines the commission charge? We examine commissions and order flow for a sample of institutional orders and find that most per-share commissions are concentrated at only a few price points, primarily 5 and 6 cents per share. Further, we find that the prior-period commission, rather than execution costs, is the strongest determinant of next period's commission. These results are inconsistent with negotiation of commissions on an order-by-order basis or with the impression of a continuous transaction cost that is deduced from the distribution of percentage commissions, suggesting that commissions are not a marginal cost of execution. We also find that institutional clients concentrate their order flow with a small set of brokers, and that small institutions concentrate more than large institutions. Collectively, our results suggest that brokers and their institutional clients enter into long-term agreements where the per-share commission is constant, and the order flow routed to a particular broker is used to maintain the required payment for an institution's desired level of service. Commissions, therefore, constitute a convenient way of charging a predetermined fixed fee for broker services.
Granot, D., Hamers, H., Kuipers, J., & Maschler, M. (2004).
Chinese Postman Games on a Class of Eulerian Graphs.
Discussion Papers. Retrieved from
/files/dp366.pdf Publisher's Version
Abstract: The extended Chinese postman (CP) enterprise is induced by a connected and undirected graph G. A server is located at some fixed vertex of G, to be referred to as the post office. Each player resides in a single edge, and each edge contains at most one player. Thus, some of the edges can be public. Each edge has a cost and a prize attached to it. The players need some service, e.g., mail delivery, which requires the server to travel from the post office and visit all edges wherein players reside, before returning to the post office. The server collects the prize attached to an edge upon the first traversal of this edge, but the cost of an edge is incurred every time it is traversed. The cost of a cheapest tour for each coalition defines a CP cost game. The issue is how to allocate, among the players, the cost that the server incurs. We study the class of extended CP enterprises which are induced by Eulerian graphs satisfying two properties: the 4-cut property (Definition 4.4) and completeness (Definition 4.8). For this class we prove that the core, resp. the nucleolus when the core is not empty, is the Cartesian product of the cores, resp. nucleoli, of CP enterprises whose graphs are simple cycles generated from G by identifying therein the end points of each elementary path (Definition 4.3). Finally, for the class of extended complete Eulerian graphs having the 4-cut property, we are able to test core membership in O(n) time, and when the core is not empty, we show how to calculate the nucleolus in O(n^2) time, n being the number of players.
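On a simple cycle the coalition cost has a transparent form: the server either traverses the whole cycle once, or makes a clockwise and a counterclockwise out-and-back trip that together cover the coalition's edges, paying each traversed edge twice. A toy sketch of the induced cost game under that observation (the prizes mentioned in the abstract are ignored, and the edge costs are made up):

```python
# Toy version of the induced CP cost game on a simple cycle (prizes
# ignored; edge costs made up). Edges 0..n-1 run clockwise from the post
# office. The cheapest tour for a coalition is either one full traversal
# of the cycle, or a clockwise plus a counterclockwise out-and-back that
# together cover the coalition's edges, each traversed edge paid twice.
from itertools import combinations

costs = [4, 1, 3, 2, 5]                  # illustrative edge costs
n = len(costs)

def coalition_cost(S):
    edges = sorted(S)
    if not edges:
        return 0.0
    best = sum(costs)                    # one full tour of the cycle
    for k in range(len(edges) + 1):      # first k edges served clockwise
        cw = 2 * sum(costs[: edges[k - 1] + 1]) if k > 0 else 0
        ccw = 2 * sum(costs[edges[k]:]) if k < len(edges) else 0
        best = min(best, cw + ccw)
    return best

for S in combinations(range(n), 2):      # all two-player coalitions
    print(S, coalition_cost(S))
print("grand coalition:", coalition_cost(range(n)))
```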
Neeman, Z., Paserman, M. D., & Simhon, A. (2004).
Corruption and Openness.
Discussion Papers. Retrieved from
/files/dp353.pdf Publisher's Version
Abstract: We report an intriguing empirical observation. The relationship between corruption and output depends on the economy's degree of openness: in open economies, corruption and GNP per capita are strongly negatively correlated; but in closed economies, there is no relationship at all. This stylized fact is robust to a variety of different empirical specifications. In particular, the same basic pattern persists if we use alternative measures of openness, if we focus on different time periods, if we restrict the sample to include only highly corrupt countries, if we restrict attention to specific geographic areas or to poor countries, and if we allow for the possible endogeneity of both the corruption and openness measures. We find that the extent to which corruption affects output is determined primarily by the degree of financial openness. The difference between closed and open economies is mainly due to the different effect of corruption on capital accumulation. We present a model, consistent with these findings, in which the main channel through which corruption affects output is capital drain.
Fiedler, K., & Kareev, Y. (2004).
Does Decision Quality (Always) Increase with the Size of Information Samples? Some Vicissitudes in Applying the Law of Large Numbers.
Discussion Papers. Published in Journal of Experimental Psychology: Learning, Memory and Cognition 32 (2006), 883-903. Retrieved from
/files/dp347.pdf Publisher's Version
Abstract: Adaptive decision-making requires that environmental contingencies between decision options and their relative advantages and disadvantages be assessed accurately and quickly. The research presented in this article addresses the challenging notion that contingencies may be more visible from small than from large samples of observations. An algorithmic account for such a "less-is-more" effect is offered within a threshold-based decision framework. Accordingly, a choice between a pair of options is only made when the contingency in the sample that describes the relative utility of the two options exceeds a critical threshold. Small samples, owing to their instability and the high dispersion of their sampling distribution, facilitate the generation of above-threshold contingencies. Across a broad range of parameter values, the resulting small-sample advantage in terms of hits is stronger than the disadvantage in terms of false alarms. Computer simulations and experimental findings support the predictions derived from the threshold model. In general, the relative advantage of small samples is most apparent when information loss is low, when decision thresholds are high, and when ecological contingencies are weak to moderate.
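The threshold account lends itself to a small simulation: decide only when the sampled advantage of one option clears a threshold, and compare decision rates and accuracies across sample sizes. A minimal sketch with made-up parameters (Gaussian payoffs, a fixed gap and threshold), not the authors' own simulation:

```python
# Minimal simulation of the threshold account in the abstract: choose only
# when the sampled advantage of one option clears a threshold. All numbers
# (Gaussian payoffs, gap, threshold) are made up for illustration.
import random

rng = random.Random(7)
TRUE_GAP = 0.2        # option A is genuinely better by this much
THRESHOLD = 0.5       # required sampled advantage before deciding

def simulate(sample_size: int, trials: int = 20_000):
    decisions = hits = 0
    for _ in range(trials):
        a = sum(rng.gauss(TRUE_GAP, 1.0) for _ in range(sample_size)) / sample_size
        b = sum(rng.gauss(0.0, 1.0) for _ in range(sample_size)) / sample_size
        if abs(a - b) > THRESHOLD:       # contingency clears the threshold
            decisions += 1
            hits += a > b                # correctly chose the better option
    return decisions / trials, hits / max(decisions, 1)

for n in (3, 10, 50):
    rate, acc = simulate(n)
    print(f"sample size {n:3d}: decision rate {rate:.2f}, accuracy {acc:.2f}")
```

Small samples decide far more often (hence more hits in absolute terms) at the price of lower accuracy, which is the trade-off the abstract describes.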