Existential Risk Persuasion Tournament (XPT)

The XPT explores potential threats to humanity in this century, with a focus on artificial intelligence, biosecurity, climate, and nuclear arms. In the first tournament, which ran from June to October 2022, more than 200 experts and highly skilled forecasters worked individually and in teams to craft forecasts and persuasive explanations.

Find more about the XPT, including the full policy report for the 2022 tournament, here.

Adversarial collaboration on AI risk

The adversarial collaboration on AI risk followed the Existential Risk Persuasion Tournament and featured more intensive interaction between parties who disagreed about the threat from AI in this century. Participants worked collaboratively to create "crux" forecasting questions that would best elucidate the disagreement between differing viewpoints. This provided a richer understanding of the large differences in participants' AI risk forecasts seen in the 2022 XPT.

Find more about the AI adversarial collaboration project, including the full report, here.

AI Conditional Trees

Our team is exploring Conditional Forecast Trees: a way of identifying and visualizing key indicators for long-run or complex outcomes. We are conducting dozens of interviews with AI experts to find the short-run indicators that they believe best predict long-run negative effects of AI on humanity. We will then elicit unconditional and conditional forecasts of these nodes to measure the implied ‘predictiveness’ of the most incisive indicators of long-run outcomes. Our goal is to produce a tree of conditional probabilities that is maximally informative about AI risk.
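The idea can be illustrated with a small calculation. The sketch below is not the project's published methodology; the specific "predictiveness" measure and the example numbers are assumptions chosen for illustration. It shows how an indicator's implied predictiveness might be derived from an unconditional forecast, a conditional forecast, and the probability of the indicator itself.

```python
# Illustrative sketch only: the "predictiveness" metric below is an
# assumption for exposition, not the project's published method.

def predictiveness(p_outcome, p_outcome_given_yes, p_indicator_yes):
    """How much does learning the indicator's value move the forecast?

    Measured here as the expected absolute shift from the unconditional
    probability, weighting each branch of the tree by its probability.
    """
    # Recover P(outcome | indicator = no) from the law of total probability:
    # P(O) = P(O|yes) * P(yes) + P(O|no) * P(no)
    p_indicator_no = 1 - p_indicator_yes
    p_outcome_given_no = (
        p_outcome - p_outcome_given_yes * p_indicator_yes
    ) / p_indicator_no
    return (p_indicator_yes * abs(p_outcome_given_yes - p_outcome)
            + p_indicator_no * abs(p_outcome_given_no - p_outcome))

# Hypothetical example: a short-run indicator forecasters judge 40% likely,
# which would move a 5% unconditional long-run risk forecast to 10%.
score = predictiveness(p_outcome=0.05, p_outcome_given_yes=0.10,
                       p_indicator_yes=0.40)
```

An indicator that barely moves the forecast in either branch scores near zero; a maximally informative indicator is one whose resolution shifts the long-run forecast the most in expectation.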

More projects in the works

  • Forecasting Proficiency Test: We are designing a one-hour assessment to identify talented forecasters in a general population.

  • Intersubjective Metrics: To help us incentivize accuracy on long-run and unresolvable questions, we will explore, classify, and test intersubjective metrics such as reciprocal scoring.

  • Epistemic Reviews: We’re working closely with policymakers and nonprofit organizations to assess how forecasting tools could help them reduce uncertainty, identify action-relevant disagreement, and guide their decision processes.

  • Project Improbable: Elicitation of forecasts in low probability ranges (<10%) is relatively unstudied and may require very different strategies from those used in typical elicitation contexts. We’re exploring ways of reducing noise in forecaster judgments of low-probability events.

  • Team Dynamics: Teams of forecasters produce more accurate predictions than individual forecasters, but at what point does enlarging a team stop improving performance? What is the optimal amount of time to spend per forecasting question? We’re running a large-scale RCT to find out how best to allocate team resources for forecasting.
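The intersubjective-metrics item above can be made concrete with a minimal sketch of the reciprocal-scoring idea: rather than waiting for a question to resolve, each forecaster is scored against the aggregate forecast of an independent counterpart group. The quadratic penalty and the use of the median here are illustrative choices, not the exact specification from the reciprocal-scoring paper.

```python
# Minimal sketch of reciprocal scoring, assuming a quadratic (Brier-style)
# penalty against the counterpart group's median; the published method may
# differ in aggregation and scoring rule.
from statistics import median

def reciprocal_scores(group_a, group_b):
    """Score group A's probability forecasts against the median of group B's.

    Lower is better: each forecaster is penalized by squared distance
    from the counterpart group's aggregate, so accuracy can be rewarded
    even when the question itself never resolves.
    """
    target = median(group_b)
    return [(p - target) ** 2 for p in group_a]

# Hypothetical forecasts on an unresolvable question:
scores = reciprocal_scores(group_a=[0.10, 0.25, 0.40],
                           group_b=[0.20, 0.30, 0.35])
```

The design choice worth noting is that both groups face the same incentive, so under suitable conditions reporting one's honest forecast is the best response to the other group doing likewise.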

Highlighted publications

Improving Judgments of Existential Risk: Better Forecasts, Questions, Explanations, Policies
Karger, E., Atanasov, P., Tetlock, P. Available at SSRN (January 5, 2022).

Reciprocal Scoring: A Method for Forecasting Unanswerable Questions
Karger, E., Monrad, J., Mellers, B., Tetlock, P. Available at SSRN (2021).

A Better Crystal Ball: The Right Way to Think About the Future
Scoblic, P., Tetlock, P. Foreign Affairs (November/December 2020).

Find a list of all of our publications here.

Our work is supported by grants from Open Philanthropy and other philanthropic foundations.