BACKGROUND
Death registration completeness, the share of deaths captured by countries’ vital registration systems, varies substantially across countries. For many countries, available estimates of completeness, even recent ones, are outdated or contradictory.
OBJECTIVE
We derive the most up-to-date and consistent estimates of death registration completeness in as many countries as possible.
RESULTS
Death registration is complete in Europe, North America, and other developed countries. In developing countries, completeness varies by region. While some have complete death registration, in many countries completeness ranges from 40% to 75%. Regionally, Africa has the lowest death registration completeness, and in many countries no registration data was located. In Latin America and Asia, several countries have improved their registration compared to previously available estimates.
CONTRIBUTION
This paper presents the publicly available International Completeness of Death Registration (ICDR) dataset (https://github.com/akarlinsky/death_registration). ICDR contains the annual number of deaths registered and death registration completeness in 193 countries from 2015 to 2019.
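As a rough illustration of how a completeness figure like this is computed (registered deaths divided by an independent estimate of total deaths), here is a minimal sketch in Python; the column names and the expected-deaths source are hypothetical and do not reflect the dataset's actual schema.

```python
import pandas as pd

# Hypothetical input: one row per country-year with registered deaths
# and an external estimate of total deaths (e.g., from a demographic model).
records = pd.DataFrame({
    "country": ["A", "A", "B"],
    "year": [2018, 2019, 2019],
    "registered_deaths": [41_000, 42_500, 9_000],
    "estimated_total_deaths": [50_000, 50_000, 30_000],
})

# Completeness = registered deaths / estimated total deaths, capped at 100%.
records["completeness_pct"] = (
    records["registered_deaths"] / records["estimated_total_deaths"]
).clip(upper=1.0) * 100

print(records[["country", "year", "completeness_pct"]])
```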
Governmental information manipulation has been hard to measure and study systematically. We hand-collect data from official and unofficial sources in 134 countries to estimate misreporting of Covid mortality during 2020–21. We find that between 45% and 55% of governments misreported the number of deaths. The lion's share of misreporting cannot be attributed to a country's capacity to accurately diagnose and report deaths. Contrary to some theoretical expectations, there is little evidence of governments exaggerating the severity of the pandemic. Misreporting is higher where governments face few social and institutional constraints, in countries holding elections, and in countries with a communist legacy.
Who and how many died in the 2020 Karabakh War? With limited evidence provided by authorities, media outlets, and human rights organizations, still little is known about the death toll caused by the 44-day conflict in and around Nagorno-Karabakh. This paper provides a first assessment of the human cost of the war. Using age–sex vital registration data from Armenia, Azerbaijan, and the de facto Republic of Artsakh/Nagorno-Karabakh, we subtract expected deaths, projected from 2015–2019 mortality trends, from observed 2020 deaths to obtain sensible estimates of excess mortality resulting from the conflict. We compare and contrast our findings with neighboring peaceful countries with similar mortality patterns and socio-cultural background and discuss them against the backdrop of the concurrent first wave of Covid-19. We estimate that the war led to almost 6,500 excess deaths among people aged 15–49. Nearly 2,800 excess losses occurred in Armenia, 3,400 in Azerbaijan, and 310 in de facto Artsakh. Deaths were highly concentrated among late adolescent and young adult males, suggesting that most excess mortality was directly related to combat. Beyond the human tragedy, for small countries like Armenia and Azerbaijan, such loss of young men represents a considerable long-term cost for future demographic, economic, and social development.
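A minimal sketch of the trend-and-difference logic described above, with made-up numbers for a single age–sex group: expected 2020 deaths are projected from a linear 2015–2019 trend, and excess deaths are the observed value minus that projection. This is an illustration, not the paper's estimation code.

```python
import numpy as np

# Hypothetical registered deaths for one age–sex group, 2015–2019.
years = np.array([2015, 2016, 2017, 2018, 2019])
deaths = np.array([210, 215, 208, 220, 218])

# Fit a linear trend and project it to 2020 to get expected deaths.
slope, intercept = np.polyfit(years, deaths, deg=1)
expected_2020 = slope * 2020 + intercept

# Excess mortality = observed 2020 deaths minus the expected value.
observed_2020 = 640  # made-up wartime figure
excess_2020 = observed_2020 - expected_2020
print(f"expected: {expected_2020:.0f}, excess: {excess_2020:.0f}")
```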
The World Health Organization has a mandate to compile and disseminate statistics on mortality, and we have been tracking the progression of the COVID-19 pandemic since the beginning of 2020. Reported statistics on COVID-19 mortality are problematic for many countries owing to variations in testing access, differential diagnostic capacity and inconsistent certification of COVID-19 as cause of death. Beyond what is directly attributable to it, the pandemic has caused extensive collateral damage that has led to losses of lives and livelihoods. Here we report a comprehensive and consistent measurement of the impact of the COVID-19 pandemic by estimating excess deaths, by month, for 2020 and 2021. We predict the pandemic period all-cause deaths in locations lacking complete reported data using an overdispersed Poisson count framework that applies Bayesian inference techniques to quantify uncertainty. We estimate 14.83 million excess deaths globally, 2.74 times more deaths than the 5.42 million reported as due to COVID-19 for the period. There are wide variations in the excess death estimates across the six World Health Organization regions. We describe the data and methods used to generate these estimates and highlight the need for better reporting where gaps persist. We discuss various summary measures, and the hazards of ranking countries’ epidemic responses.
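To give a flavor of the kind of overdispersed count model mentioned above, here is a minimal sketch that draws expected deaths from a negative binomial (an overdispersed Poisson) and summarizes the implied excess with an uncertainty interval. All numbers and the dispersion parameter are hypothetical; the WHO framework is considerably richer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expected monthly deaths (mean) and an assumed dispersion.
expected_mean = 50_000.0
dispersion = 100.0  # negative binomial "size" parameter (illustrative)

# Negative binomial parameterized to have the desired mean.
p = dispersion / (dispersion + expected_mean)
expected_draws = rng.negative_binomial(dispersion, p, size=10_000)

observed = 63_000  # made-up observed deaths for the month
excess_draws = observed - expected_draws

lo, mid, hi = np.percentile(excess_draws, [2.5, 50, 97.5])
print(f"excess ~ {mid:.0f} (95% interval {lo:.0f} to {hi:.0f})")
```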
Recent work highlights that identification of present bias using task-completion data is problematic. In this note, we add to the literature in two ways. First, whereas prior work considers single-deadline tasks, we consider tasks for which there is a series of deadlines, with incremental penalties for missing each deadline. Second, we also consider identification of forgetting from the same type of data. Using numerical examples, we demonstrate that identification of present bias is still problematic even when there are multiple deadlines. Identification of forgetting perhaps holds more promise theoretically, although in practice we suspect it too is problematic.
A menu description defines a mechanism to player i in two steps. Step (1) uses the reports of other players to describe i's menu: the set of i's potential outcomes. Step (2) uses i's report to select i's favorite outcome from her menu. Can menu descriptions better expose strategyproofness, without sacrificing simplicity? We propose a new, simple menu description of Deferred Acceptance. We prove that—in contrast with other common matching mechanisms—this menu description must differ substantially from the corresponding traditional description. We demonstrate, with a lab experiment on two simple mechanisms, the promise and challenges of menu descriptions.
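As a toy illustration of the two-step structure (not the paper's menu description of Deferred Acceptance), here is a sketch for a simple serial-dictatorship mechanism, where player i's menu is whatever objects the earlier players have not yet taken.

```python
# Toy two-step "menu description" for serial dictatorship (illustrative only).

def menu_for(i, reports_of_others, objects):
    """Step (1): use the other players' reports to build player i's menu."""
    remaining = list(objects)
    for j in sorted(reports_of_others):
        if j < i:  # players before i pick, in order, their best remaining object
            pick = next(o for o in reports_of_others[j] if o in remaining)
            remaining.remove(pick)
    return remaining

def choose_from_menu(report_i, menu):
    """Step (2): use player i's own report to pick her favorite menu item."""
    return next(o for o in report_i if o in menu)

# Example: three objects; player 1 ranks b > a > c and chooses after player 0.
objects = ["a", "b", "c"]
reports_of_others = {0: ["a", "c", "b"]}        # player 0's ranking
menu = menu_for(1, reports_of_others, objects)  # -> ["b", "c"]
print(choose_from_menu(["b", "a", "c"], menu))  # -> "b"
```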
Can incorporating expectations-based-reference-dependence (EBRD) considerations reduce seemingly dominated choices in the Deferred Acceptance (DA) mechanism? We run two experiments (total N = 500) where participants are randomly assigned into one of four DA variants—{static, dynamic} × {student proposing, student receiving}—and play ten simulated large-market school assignment problems. While a standard, reference-independent model predicts the same straightforward behavior across all problems and variants, a news-utility EBRD model predicts stark differences across variants and problems. As the EBRD model predicts, we find that (i) across variants, dynamic student receiving leads to significantly fewer deviations from straightforward behavior, (ii) across problems, deviations increase with competitiveness, and (iii) within specific problems, the specific deviations predicted by the EBRD model are indeed those commonly observed in the data.
We propose and initiate the study of privacy elasticity—the responsiveness of economic variables to small changes in the level of privacy given to participants in an economic system. Individuals rarely experience either full privacy or a complete lack of privacy; we propose to use differential privacy—a computer-science theory increasingly adopted by industry and government—as a standardized means of quantifying continuous privacy changes. The resulting privacy measure implies a privacy-elasticity notion that is portable and comparable across contexts. We demonstrate the feasibility of this approach by estimating the privacy elasticity of public-good contributions in a lab experiment.
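A minimal sketch of what a privacy-elasticity estimate could look like under this framing: treat the differential-privacy parameter epsilon as the continuous privacy level and read the elasticity off a log-log fit of contributions on epsilon. The data and the functional form are hypothetical, not the paper's estimator.

```python
import numpy as np

# Hypothetical experiment: average public-good contribution at each
# differential-privacy level epsilon (smaller epsilon = more privacy).
epsilon = np.array([0.1, 0.5, 1.0, 2.0, 4.0])
contribution = np.array([6.1, 5.4, 5.0, 4.6, 4.2])

# Elasticity ~ slope of log(contribution) on log(epsilon).
slope, _ = np.polyfit(np.log(epsilon), np.log(contribution), deg=1)
print(f"estimated privacy elasticity: {slope:.2f}")
```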
We investigate the relationship between (a) official information on COVID-19 infection and death case counts; (b) beliefs about such case counts, at present and in the future; (c) beliefs about average infection chance—in principle, directly calculable from (b); and (d) self-reported health-protective behavior. We elicit (b), (c), and (d) with a daily online survey in the US from March to August 2020 (N ≈ 13,900). Beliefs about future infection cases are closely related to official information, but are inconsistent with beliefs about infection chances—risk perceptions—which are better predictors of reported behavior. We discuss potential implications for public communication of health-risk information.
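For instance, the link between (b) and (c) is simple arithmetic: a respondent who expects a given number of new cases implies an average infection chance of roughly that number divided by the population. The figures below are hypothetical.

```latex
% Hypothetical illustration: expected new cases over the horizon divided by
% the US population gives the implied average infection chance.
\[
  \Pr(\text{infection}) \approx
  \frac{\text{expected new cases}}{\text{population}}
  = \frac{3{,}300{,}000}{330{,}000{,}000} \approx 1\%.
\]
```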
From February to April 2020, as COVID-19 hit the U.S. economy, the official unemployment rate (UR) climbed from 3.5 percent—the lowest in more than 50 years—to 14.7 percent—the highest since measurement began in January 1948. This unprecedented, speedy quadrupling of UR coincided with major disruptions in survey-data-collection procedures and a dramatic, differential drop in response rates.
To what extent did measurement issues contribute to this quadrupling? We revisit two recently studied potential biases in the Current Population Survey: rotation group bias (Krueger, Mas and Niu, 2017) and difficulty-of-reaching bias (Heffetz and Reeves, 2019). We extend the original analyses to the years prior to the crisis and focus on the six months of peak UR, from April to September 2020. Our ballpark estimates suggest that the peak official UR figure could be biased by up to ~1.5 percentage points in either direction.
Expenditure visibility—the extent to which a household's spending on a consumption category is noticeable to others—is measured in three new surveys, with ~3,000 telephone and online respondents. Visibility shows little change across time (ten years) and survey methods. Four different notions, or dimensions, of visibility are measured: the noticeability of above-average spending on a category; that of below-average spending; and the positivity/negativity of impressions made by above- and below-average spending. Jointly, these visibility measures explain up to three quarters (or more) of the observed variation in total-expenditure elasticities across consumption categories in U.S. data. Possible theoretical explanations are explored.
Groups of people in pain evoke our empathic reactions. Yet how does one empathize with a group? Here, we aim to identify psychological mechanisms that underlie empathic reactions to groups. We theorize that because empathy is an egocentric process routed through the self, people are strongly attuned to the impact on each individual, and less so to the number of individuals affected. In five pre-registered experiments, involving different types of stimuli and valences of the outcomes, we repeatedly find that participants’ level of empathy depends on the pain experienced by each individual, but not on the number of individuals in the group. The experiments support our hypothesis and rule out alternative explanations such as limited numeric ability and strategic regulation of negative emotions, providing valuable insights into the phenomenon of scope insensitivity. The findings also bear implications for the ongoing debate on the role of empathy in public policy decisions.
Advisors face a conflict of interest when their interests and those of the recipients of their advice are misaligned. Conflicted advisors need to resolve the tension between two competing motives, the need to provide sincere advice that fulfills the recipient’s goals and the temptation to give advice that caters to their self-interest. We theorized that the choice context should affect selfish advice-giving. Our basic experimental condition presented the advisor with two alternative recommendations, one optimal for the recipient, and one preferable for the advisor. We hypothesized that introducing a third (inferior) alternative (in the context condition) should increase the advisor’s tendency to give selfish advice. In Study 1, advisors who were instructed to transmit a recommendation to an anonymous recipient, were more selfish in the context than in the basic condition. Study 2 further found that the effect was obtained when the third alternative was strictly dominated by the selfish recommendation. Studies 3-4 tested the idea that forewarning advisors about the need to explain their choices should moderate the effect. Study 5 tested the advisors’ awareness of the context effect. Studies 6-7 investigated the reactions of advice recipients and social observers to selfish advice-giving and found them also biased by context. Our theoretical account posits a reference-based evaluation process. This mechanism explains the advisors’ tendency to give selfish advice as well as the social actors’ reactions to the transmission of such advice. We discuss the context effect in relation to the asymmetric dominance effect, social preferences, and ethical decision making.
We investigate individual decisions that produce gains for oneself, while imposing losses on a group of others. We theorize, based on the notion of empathy, that decision-makers consider the magnitude of the pain or loss they inflict on an individual in the group, but are largely insensitive to the number of individuals in the group who suffer losses. Studies involving personal choices or judgments of others’ choices largely confirmed these predictions. They also revealed a dispersion effect, whereby participants made more selfish choices, and judged others’ selfish choices more leniently, when the social losses were dispersed more thinly across a group. It appears that decision-makers’ empathy for others who suffer losses is not readily adjusted to the number of people affected or to the aggregated losses. It also appears that empathy mediates judgments of selfish behavior. The findings are related to theories of empathy and to decisions under conflicts of interest.
Paul Grice explained how inferences drawn from conversation differ from logical inferences from arguments. In particular, his theory helps us understand how it is possible to deceive without lying. This sheds light on Aryeh Deri's plea bargain. Did he lie? Did he deceive?
Consider the problem of maximizing the revenue from selling a number of heterogeneous goods to a single buyer whose private values for the goods are drawn from a (possibly correlated) known distribution, and whose valuation for the goods is additive. It is already known that when there are two (or more) goods, simple mechanisms may yield only a negligible fraction of the optimal revenue. This thesis compares revenues from various classes of mechanisms to revenues from the two simplest mechanisms — selling the goods separately and selling them as a bundle — by using previously defined tools, namely, multiple of separated revenue (MoS) and multiple of bundled revenue (MoB). We show in particular that monotonic mechanisms cannot yield more than k times the separated revenue (where k is the number of goods), and obtain bounds on the revenue of deterministic mechanisms.
Maximizing the revenue from selling two or more goods has been shown to require the use of nonmonotonic mechanisms, where a higher-valuation buyer may pay less than a lower-valuation one. Here we show that the restriction to monotonic mechanisms may not just lower the revenue, but may in fact yield only a negligible fraction of the maximal revenue; more precisely, the revenue from monotonic mechanisms is no more than k times the simple revenue obtainable by selling the goods separately, or bundled (where k is the number of goods), whereas the maximal revenue may be arbitrarily larger. We then study the class of monotonic mechanisms and its subclass of allocation-monotonic mechanisms, and obtain useful characterizations and revenue bounds.
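One way to write the headline bound stated above, under the reading that "the simple revenue" refers to the better of selling separately and selling as a bundle (the notation MonRev, SRev, BRev is used here only for illustration):

```latex
% One reading of the bound above: for k goods with valuations X, the revenue
% of any monotonic mechanism is at most k times the better of the
% separate-sale and bundled-sale revenues, while the unrestricted optimal
% revenue Rev(X) can be arbitrarily larger than both.
\[
  \mathrm{MonRev}(X) \;\le\; k \cdot \max\{\mathrm{SRev}(X),\ \mathrm{BRev}(X)\}.
\]
```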
A stumper is a riddle whose solution is typically so elusive that it does not come to mind, at least initially – leaving the responder stumped. Stumpers work by eliciting a (typically visual) representation of the narrative, in which the solution is not to be found. In order to solve the stumper, the blocking representation must be changed, which does not happen to most respondents. I have collected all the riddles I know at this time that qualify, in my opinion, as stumpers. I have composed a few, and tested many. Whenever rates of correct solutions were available, they are included, giving a rough proxy for difficulty.
Forecasters should be tested by the Brier score and not just by the calibration score, which can always be made arbitrarily small. The Brier score is the sum of the calibration score and the refinement score; the latter measures how good the sorting into bins with the same forecast is, and thus attests to expertise. This raises the question of whether one can gain calibration without losing expertise, which we refer to as calibeating. We provide an easy way to calibeat any forecast, by a deterministic online procedure. We moreover show that calibeating can be achieved by a stochastic procedure that is itself calibrated, and then extend the results to simultaneously calibeating multiple procedures, and to deterministic procedures that are continuously calibrated.
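As a small illustration of the decomposition mentioned above (the Brier score equals the calibration score plus the refinement score, computed over bins of periods sharing the same forecast), here is a sketch with made-up forecasts and binary outcomes; it is not the paper's calibeating procedure.

```python
import numpy as np

# Made-up probability forecasts (taking finitely many values) and 0/1 outcomes.
forecasts = np.array([0.2, 0.2, 0.2, 0.7, 0.7, 0.7, 0.7, 0.5, 0.5, 0.5])
outcomes  = np.array([0,   1,   0,   1,   1,   0,   1,   1,   0,   0  ])

# Brier score: mean squared distance between forecast and outcome.
brier = np.mean((forecasts - outcomes) ** 2)

calibration = refinement = 0.0
for f in np.unique(forecasts):
    in_bin = forecasts == f                  # bin: all periods with forecast f
    weight = in_bin.mean()                   # share of periods in this bin
    freq = outcomes[in_bin].mean()           # empirical frequency in the bin
    calibration += weight * (f - freq) ** 2  # gap between forecast and frequency
    refinement += weight * freq * (1 - freq) # small when bins separate 0s from 1s

print(f"Brier {brier:.3f} = calibration {calibration:.3f}"
      f" + refinement {refinement:.3f}")
```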