Interdisciplinary Research Cluster
First Semester (2025-2026) | Wednesdays
The Federmann Center for the Study of Rationality is hosting the Tech Accountability interdisciplinary research cluster for Semester Aleph of the 2025-2026 academic year.
The cluster will host one seminar, panel, or event at the Center that will be open to the public during the semester. Further details regarding this event will be published closer to the date.
The group meets weekly on Wednesdays at the Center.
Cluster Members
The Tech Accountability cluster is led by Ido Sivan-Sevilla, joined by Amir Feder and Katrina Ligett.

Ido Sivan-Sevilla (Cluster Leader) Senior Lecturer at the MATAR Program (School of Computer Science & Engineering) & the Federmann School of Public Policy and Governance, HUJI. Ido develops measurements and theories of how society is governed, across problems in cybersecurity, privacy, information integrity, and machine learning.

Amir Feder Assistant Professor of Computer Science at HUJI & Research Scientist at Google (part-time). Amir works on language models and causal inference, often for applications in computational social science. His research develops methods that integrate causality into language models and facilitate scientific inquiry with text data. Previously, Amir was a postdoctoral fellow at Columbia University.

Katrina Ligett Professor in the School of Computer Science and Engineering, HUJI. Katrina is the director of the Federmann Center for the Study of Rationality, an affiliated faculty member and former head of the MATAR program, and an affiliate of the Federmann Cyber Security Research Center. Before joining Hebrew University, she was faculty in computer science and economics at Caltech. Her primary research interests are in data privacy, algorithmic fairness, machine learning theory, and algorithmic game theory.
Events
1. Tech Accountability Seminar (Sunday, 28/12/2025 at 10:30)
"Why Language Models Hallucinate" by Dr. Adam Tauman Kalai (OpenAI)
Abstract:
Large language models sometimes generate statements that are plausible but factually incorrect—a phenomenon commonly called "hallucination." We argue that these errors are not mysterious failures of architecture or reasoning, but rather predictable consequences of standard training and evaluation incentives.
We show (i) that hallucinations can be viewed as classification errors: when pretrained models cannot reliably distinguish a false statement from a true one, they may produce the false option rather than saying "I don't know"; (ii) that optimizing for benchmark performance encourages guessing rather than abstaining, since most evaluation metrics penalize expressing uncertainty; and (iii) that a possible mitigation path lies in revising existing benchmarks to reward calibrated abstention, thus realigning incentives in model development.
Joint work with Santosh Vempala (Georgia Tech) and Ofir Nachum & Edwin Zhang (OpenAI).
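The incentive argument in point (ii) reduces to simple expected-value arithmetic. The sketch below illustrates it; the scoring rules and probabilities are illustrative assumptions for this note, not details from the talk:

```python
# Illustrative expected scores per question under two benchmark metrics.
# Both scoring rules and all numbers here are hypothetical examples.

def binary_score(p_correct: float, abstain: bool) -> float:
    """Common 0/1 metric: 1 point if correct, 0 otherwise.

    Abstaining ("I don't know") always scores 0, so it is never
    better than guessing under this rule.
    """
    return 0.0 if abstain else p_correct

def penalized_score(p_correct: float, abstain: bool) -> float:
    """An assumed abstention-aware metric: +1 correct, -1 wrong, 0 abstain."""
    return 0.0 if abstain else p_correct - (1.0 - p_correct)

# Even a weak guess (10% chance of being right) beats abstaining
# under the 0/1 metric, so score-optimizing models learn to guess.
assert binary_score(0.10, abstain=False) > binary_score(0.10, abstain=True)

# Under the penalized metric, abstaining is optimal whenever
# confidence falls below 50%, rewarding calibrated uncertainty.
assert penalized_score(0.10, abstain=True) > penalized_score(0.10, abstain=False)
```

Under the penalized rule the break-even point sits exactly at 50% confidence, which is the sense in which revised benchmarks could realign incentives toward calibrated abstention.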
Contact
Researchers interested in engaging with the Tech Accountability cluster are invited to reach out to the cluster leader, Ido Sivan-Sevilla, at sevilla@cs.huji.ac.il.
Interdisciplinary Research Cluster
Second Semester (2025-2026) | Mondays
The Federmann Center for the Study of Rationality is hosting the "Large Language Models (LLMs) for studying Values, Taste and Perceptions" interdisciplinary research cluster for Semester Bet of the 2025-2026 academic year.
The cluster will host one seminar, panel, or event at the Center that will be open to the public during the semester. Further details regarding this event will be published closer to the date.
The group meets weekly on Mondays at the Center.
