Objective: To evaluate whether full-term deliveries resulting in neonates diagnosed with hypoxic-ischemic encephalopathy are associated with a significant increase in the rate of subsequent unscheduled cesarean deliveries. Methods: We conducted a retrospective chart review and examined all deliveries in the Department of Obstetrics and Gynecology at Hadassah University Hospital, Mt. Scopus campus, Jerusalem, Israel, during 2009-2014. We reviewed all cases of hypoxic-ischemic encephalopathy in singleton, term, liveborn deliveries and identified seven such cases: three attributed to obstetric mismanagement and four that were not. We measured the rate of unscheduled cesarean deliveries before and after the events and their respective hazard ratio (HR). Results: Prior to a mismanaged delivery resulting in hypoxic-ischemic encephalopathy, the baseline rate of unscheduled cesarean deliveries was approximately 80 per 1,000 deliveries. In the first 4 weeks immediately after each of the three identified cases, there was a significant but transient increase in the rate of unscheduled cesarean deliveries of an additional 48 per 1,000 deliveries (95% CI 27-70/1,000). We estimated that each case was associated with approximately 17 additional unscheduled cesarean deliveries (95% confidence interval 8-27). There was no increase in the rate of unscheduled cesarean deliveries after cases of hypoxic-ischemic encephalopathy that were not associated with mismanagement. Conclusion: The increase in the rate of unscheduled cesarean deliveries after a catastrophic neonatal outcome may reflect short-term changes in obstetricians' risk evaluation.
Separate selling of two independent goods is shown to yield at least 62% of the optimal revenue, and at least 73% when the goods satisfy the Myerson regularity condition. This improves the 50% result of Hart and Nisan (2017, originally circulated in 2012).
We examine a solution concept, called the value, for n-person strategic games. In applications, the value provides an a priori assessment of the monetary worth of a player's position in a strategic game, comprising not only the player's contribution to the total payoff but also the player's ability to inflict losses on other players. A salient feature is that the value takes account of the costs that 'spoilers' impose on themselves. Our main result is an axiomatic characterization of the value. For every subset, S, consider the zero-sum game played between S and its complement, where the players in each of these sets collaborate as a single player, and where the payoff is the difference between the sum of the payoffs to the players in S and the sum of payoffs to the players not in S. We say that S has an effective threat if the minmax value of this game is positive. The first axiom is that if no subset of players has an effective threat, then all players are allocated the same amount. The second axiom is that if the overall payoff to the players in a game is the sum of their payoffs in two unrelated games, then the overall value is the sum of the values in these two games.
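The effective-threat condition above turns on the minmax value of a zero-sum matrix game between a coalition and its complement. As a minimal side illustration (the standard linear-programming formulation of a matrix game's value, not code from the paper), one can compute such a minmax value as follows:

```python
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(A):
    """Minmax value of the zero-sum matrix game A (row player maximizes),
    computed by the standard linear-programming formulation."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    # variables: x_1..x_m (row player's mixed strategy) and v (the value)
    c = np.zeros(m + 1)
    c[-1] = -1.0                                # maximize v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])   # v <= (x^T A)_j for every column j
    b_ub = np.zeros(n)
    A_eq = np.ones((1, m + 1))
    A_eq[0, -1] = 0.0                           # probabilities sum to 1
    b_eq = [1.0]
    bounds = [(0, None)] * m + [(None, None)]   # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1]

# matching pennies has minmax value 0, so under the definition above a
# coalition facing this threat game would have no effective threat
print(matrix_game_value([[1, -1], [-1, 1]]))
```
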
Elon Kohlberg, Abraham Neyman. 2017. “Games of Threats.”
A game of threats on a finite set of players, $N$, is a function $d$ that assigns a real number to any coalition, $S \subseteq N$, such that $d(S) = -d(N \setminus S)$. A game of threats is not necessarily a coalitional game, as it may fail to satisfy the condition $d(\emptyset) = 0$. We show that analogs of the classic Shapley axioms for coalitional games determine a unique value for games of threats. This value assigns to each player an average of the threat powers, $d(S)$, of the coalitions that include the player.
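The value described in the last sentence can be made concrete with a small sketch. The following is a hedged illustration: the `value_of_threats` function and its two-stage uniform averaging (first over coalitions of a fixed size containing the player, then over sizes) are our reading of the abstract, not code from the paper.

```python
from itertools import combinations

def value_of_threats(players, d):
    """For each player, average d(S) over coalitions containing the player:
    first the mean over coalitions of a fixed size, then the mean over sizes.
    (This two-stage averaging is our reading of the abstract.)"""
    n = len(players)
    # sanity check: d(S) = -d(N \ S) for every coalition S
    for k in range(n + 1):
        for S in combinations(players, k):
            comp = tuple(p for p in players if p not in S)
            assert d(S) == -d(comp)
    phi = {}
    for i in players:
        means = []
        for k in range(1, n + 1):
            vals = [d(S) for S in combinations(players, k) if i in S]
            means.append(sum(vals) / len(vals))
        phi[i] = sum(means) / n
    return phi

# toy threat function: d(S) = |S| - |N \ S|, which satisfies
# d(S) = -d(N \ S) but not d(emptyset) = 0
players = (0, 1, 2)
d = lambda S: len(S) - (len(players) - len(S))
print(value_of_threats(players, d))  # each player gets 1.0
```

In this toy example the three values sum to d(N) = 3, consistent with a Shapley-style efficiency axiom.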
The dominant view of cognitive aging holds that while controlled processes (e.g., working memory and executive functions) decline with age, implicit (automatic) processes do not. In this paper we challenge this view by arguing that high-level automatic processes (e.g., implicit motivation) decline with age, and that this decline plays an important and as yet unappreciated role in cognitive aging. Specifically, we hypothesized that due to their decline, high-level automatic processes are less likely to be spontaneously activated in old age, and so their subtle, external activation should have stronger effects on older (vs. younger) adults. In two experiments we used different methods of implicitly activating motivation, and measured executive functions of younger and older adults via the Wisconsin Card Sorting Test. In Experiment 1 we used goal priming to subtly increase achievement motivation. In Experiment 2 motivation was manipulated by subtly increasing engagement in the task. More specifically, we introduce the Jerusalem Face Sorting Test (JFST), a modified version of the WCST that uses cards with faces instead of geometric shapes. In both experiments, implicitly induced changes in motivation improved older, but not younger, adults' executive functioning. The framework we propose is general, and it has implications for how we view and test cognitive functions. Our case study of older adults offers a new look at various aspects of cognitive aging. Applications of this view to other special populations (e.g., ADHD, schizophrenia) and possible interventions are discussed.
A repeat voting procedure is proposed, whereby voting is carried out in two identical rounds. Every voter can vote in each round, the results of the first round are made public before the second round, and the final result is determined by adding up all the votes in both rounds. It is argued that this simple modification of election procedures may well increase voter participation and result in more accurate and representative outcomes.
Bezalel Peleg, Shmuel Zamir. 2017. “Sequential Aggregation of Judgments.”
We consider a standard model of judgment aggregation as presented, for example, in Dietrich (2015). For this model we introduce a sequential aggregation procedure (SAP) which uses the majority rule as much as possible. The ordering of the issues is assumed to be exogenous. The exact definition of SAP is given in Section 3. In Section 4 we construct an intuitive relevance relation for our model, closely related to conditional entailment. Unlike Dietrich (2015), where the relevance relation is given exogenously as part of the model, we require that the relevance relation be derived from the agenda. We prove that SAP has the property of independence of irrelevant issues (III) with respect to (the transitive closure of) our relevance relation. As III is weaker than the property of proposition-wise independence (PI), we do not run into impossibility results as does List (2004), who incorporates PI in some parts of his analysis. We proceed to characterize SAP by anonymity, restricted monotonicity, local neutrality, the restricted agenda property, and independence of past deliberations (see Section 5 for the precise details). We also use this occasion to show that Roberts's (1991) characterization of choice by plurality voting can be adapted to our model.
Johannes Müller-Trede, Shoham Choshen-Hillel, Meir Barneron, Ilan Yaniv. 2017. “The Wisdom of Crowds in Matters of Taste.”
Decision makers can often improve the accuracy of their judgments on factual matters by consulting 'crowds' of others for their respective opinions. In this article, we investigate whether decision makers could similarly draw on crowds to improve the accuracy of their judgments about their own tastes and hedonic experiences. We present a theoretical model which states that accuracy gains from consulting a crowd's judgments of taste depend on the interplay among taste discrimination, crowd diversity, and the similarity between the crowd's preferences and those of the decision maker. The model also delineates the boundary conditions for such 'crowd wisdom.' Evidence supporting our hypotheses was found in two laboratory studies in which decision makers made judgments about their own enjoyment of musical pieces and short films. Our findings suggest that, although different people may have different preferences and inclinations, their judgments of taste can benefit from the wisdom of crowds.
In many common interactive scenarios, participants lack information about other participants, and specifically about the preferences of other participants. In this work, we model an extreme case of incomplete information, which we term games with type ambiguity, where a participant lacks even information enabling him to form a belief on the preferences of others. Under type ambiguity, one cannot analyze the scenario using the commonly used Bayesian framework, and therefore one needs to model the participants using a different decision model. To this end, we present the MINthenMAX decision model under ambiguity. This model is a refinement of Wald's MiniMax principle, which we show to be too coarse for games with type ambiguity. We characterize MINthenMAX as the finest refinement of the MiniMax principle that satisfies three properties we claim are necessary for games with type ambiguity. The prior-less approach we present here also follows the common practice in computer science of worst-case analysis. Finally, we define and analyze the corresponding equilibrium concept, when all players follow MINthenMAX. We demonstrate this equilibrium by applying it to two common economic scenarios: coordination games and bilateral trade. We show that in both scenarios an equilibrium in pure strategies always exists, and we analyze the equilibria.
Corporate entities enjoy legal subjectivity in a variety of forms, but they are not human beings. This paper explores, from a normative point of view, one of the limits that ought to be imposed on the capacity of corporations to be treated "as if" they had a human nature: their recognition as legitimate bearers of basic human rights. The assertion that corporations, like living persons, are entitled to constitutional protection was famously brought to the fore by a number of recent Supreme Court cases, most notably the Citizens United and the Hobby Lobby cases. In the rational choice analysis that follows, this paper reveals that the new jurisprudence emanating from Citizens United may be justified in the relatively insignificant cases of small companies with egalitarian distribution of shares, but ought to be rejected in the more meaningful cases of large public corporations with controlling stockholders. The ruling in Hobby Lobby, on the other hand, can be defended regardless of the size of the corporation or the composition of its owners. In both of these cases it is not the rights of the corporate entity that are truly at stake, and the final outcome ought to hinge on the constitutional rights of real human beings.
Uriel Procaccia, Eyal Winter. 2016. “Corporate Crime and Plea Bargains.”
Corporate entities enjoy legal subjectivity in a variety of forms, but they are not human beings, and hence their legal capacity to bear rights and obligations of their own is not universal. This paper explores, from a normative point of view, one of the limits that ought to be set on the capacity of corporations to act "as if" they had a human nature, their capacity to commit crime. Accepted wisdom has it that corporate criminal liability is justified as a measure to deter criminal behavior. Our analysis supports this intuition in one subset of cases, but also reveals that deterrence might in fact be undermined in another subset of cases, especially in an environment saturated with plea bargains involving serious violations of the law.
Elchanan Ben-Porath, Eddie Dekel, Barton L. Lipman. 2016. “Disclosure and Choice.”
An agent chooses among projects with random outcomes. His payoff is increasing in the outcome and in an observer's expectation of the outcome. With some probability, the agent can disclose the true outcome to the observer. We show that choice is inefficient: the agent favors riskier projects even with lower expected returns. If information can be disclosed by a challenger who prefers lower beliefs of the observer, the chosen project is excessively risky when the agent has better access to information, excessively risk-averse when the challenger has better access, and efficient otherwise. We also characterize the agent's worst-case equilibrium payoff.
An evolutionarily stable strategy (ESS) is an equilibrium strategy that is immune to invasions by rare alternative (mutant) strategies. Unlike Nash equilibria, ESS do not always exist in finite games. In this paper we address the question of what happens when the size of the game increases: does an ESS exist for almost every 'large' game? We let the entries of an n × n game matrix be independently randomly chosen according to a symmetrical subexponential distribution F, and study the expected number of ESS with support of size d as n → ∞. In a previous paper by Hart, Rinott and Weiss [6] it was shown that this limit is 1/2 for d = 2. This paper deals with the case of d ≥ 4, and proves the conjecture in [6] (Section 6,c) that the expected number of ESS with support of size d ≥ 4 is 0. Furthermore, it discusses the classic problem of the number of facets of a convex hull of n random points in R^d, and relates it to the above ESS problem. Given a collection of i.i.d. random points, our result implies that the expected number of facets of their convex hull converges to 2d as n → ∞.
The Gibbard-Satterthwaite Impossibility Theorem (Gibbard, 1973; Satterthwaite, 1975) holds that dictatorship is the only unanimous and strategyproof social choice function on the full domain of preferences. Much of the work in mechanism design aims at getting around this impossibility theorem. Three grand success stories stand out: on the domains of single-peaked preferences, house matching, and quasilinear preferences, there are appealing unanimous and strategyproof social choice functions. We investigate whether these success stories are robust to strengthening strategyproofness to obvious strategyproofness, recently introduced by Li (2015). A social choice function is obviously strategyproof (OSP) implementable if even cognitively limited agents can recognize their strategies as weakly dominant. For single-peaked preferences, we characterize the class of OSP-implementable and unanimous social choice rules as 'dictatorships with safeguards against extremism': mechanisms (which turn out to also be Pareto optimal) in which the dictator can choose the outcome, but other agents may prevent the dictator from choosing an outcome which is too extreme. Median voting is consequently not OSP-implementable; indeed, the only OSP-implementable quantile rules choose either the minimal or the maximal ideal point. For house matching, we characterize the class of OSP-implementable and Pareto-optimal matching rules as 'sequential barter with lurkers,' a significant generalization of bossy variants of bipolar serially dictatorial rules. While Li (2015) shows that second-price auctions are OSP-implementable when only one good is sold, we show that this positive result does not extend to the case of multiple goods. Even when all agents' preferences over goods are quasilinear and additive, no welfare-maximizing auction in which losers pay nothing is OSP-implementable when more than one good is sold.
Our analysis makes use of a gradual revelation principle, an analog of the (direct) revelation principle for OSP mechanisms that we present and prove.
There is a long history of experiments in which participants are instructed to generate a long sequence of binary random numbers. The scope of this line of research has shifted over the years from identifying the basic psychological principles and/or heuristics that lead to deviations from randomness to predicting future choices. In this paper, we used generalized linear regression and the framework of reinforcement learning in order to address both points. In particular, we used logistic regression analysis in order to characterize the temporal sequence of participants' choices. Surprisingly, a population analysis indicated that the most recent trial has only a weak effect on behavior compared with earlier trials, a result that seems irreconcilable with standard sequential effects that decay monotonously with the delay. However, when considering each participant separately, we found that the magnitude of the sequential effect is a monotonously decreasing function of the delay, yet these individual sequential effects are largely averaged out in a population analysis because of heterogeneity. The substantial behavioral heterogeneity in this task is further demonstrated quantitatively by considering the predictive power of the model. We show that a heterogeneous model of sequential dependencies captures the structure available in random sequence generation. Finally, we show that the results of the logistic regression analysis can be interpreted in the framework of reinforcement learning, allowing us to compare the sequential effects in the random sequence generation task to those in an operant learning task. We show that, in contrast to the random sequence generation task, sequential effects in operant learning are far more homogeneous across the population.
These results suggest that in the random sequence generation task, different participants adopt different cognitive strategies to suppress sequential dependencies when generating the “random” sequences.
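The lagged logistic-regression analysis described above can be sketched as follows. This is a hedged illustration on synthetic data: the five-lag window, the alternation-bias generator, and the use of scikit-learn are our choices for the sketch, not the authors' exact procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# a synthetic "random" binary sequence with a mild alternation bias,
# standing in for one participant's choices
seq = [rng.integers(2)]
for _ in range(999):
    p_repeat = 0.4  # slight tendency to alternate rather than repeat
    seq.append(seq[-1] if rng.random() < p_repeat else 1 - seq[-1])
seq = np.array(seq)

# design matrix: each choice predicted from the 5 preceding choices
n_lags = 5
X = np.column_stack([seq[n_lags - k - 1 : -k - 1] for k in range(n_lags)])
y = seq[n_lags:]

model = LogisticRegression().fit(X, y)
# coef_[0][k] is the weight of the choice made k+1 trials back;
# a negative weight at lag 1 reflects the alternation bias
print(model.coef_[0])
```

Fitting such a model per participant, rather than on pooled data, is what reveals the individual sequential effects that a population analysis averages out.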
This philosophical work lays the groundwork for a game-theoretic account of (romantic) love, substantiating the folk-psychological conception of love as 'a unification of souls'. It does so by setting up an appropriate universal framework of cognitive agency that accommodates such unifications and motivates them. This framework applies the gene's-eye view of evolution to the evolution of cognition, integrating it with a distributed, dynamic theory of selfhood and the game-theoretic principles of agent-unification that govern these dynamics. The application of this framework to particular biological settings produces love as a theoretical evolutionary prediction (unveiling its rationality). Through this, the connection of this strategic normativity to love's real-life behavioral and phenomenological expressions is systematically explored.
Following Dietrich (2014) we consider using choice by plurality voting (CPV) as a judgment aggregation correspondence. We notice that a result of Roberts (1991) implies that CPV is axiomatically characterized by anonymity, neutrality, unanimity, and (Young's) reinforcement. Following List (2004) and Dietrich (2015) we construct a sequential voting procedure of judgment aggregation which satisfies rationality, anonymity, unanimity, and independence of irrelevant propositions (with respect to a relevance correspondence that does not satisfy transitivity). We offer a tentative characterization of this aggregation procedure.
David Heyd, Uriel Procaccia, Uzi Segal. 2016. “The Thorny Quest for a Rational Constitution.”