Use me to compute value of information

How to use this sheet

To edit this sheet you will need to make a copy ("File" -> "Make a copy"). Then, to use the calculator:

- In column C, enter the person's forecast that the ultimate question (U) resolves positively.
- In column D, enter the person's forecast that the crux question (C) resolves positively.
- In column E, enter the person's probability that U resolves positively conditional on C resolving positively.

Detailed walkthrough

Value of information (VOI) is a measure of how much knowing the answer to a question would change an individual's belief, in expectation. This is useful for understanding why individuals believe what they believe and what would change their minds. Use this tab to compute how much knowing whether some event C will happen would change someone's forecast on the ultimate question, U.

In row 3, we compute how much knowing the answer to "AI writes AI" (a question about AI autonomously improving and deploying AI software by 2030) would change Pascal's forecast of how likely it is that AI causes an existential catastrophe by 2100.

- In cell C3, we see that Pascal's P(U) is 9.00%: they think there is a 9.00% chance that AI causes an existential catastrophe by 2100.
- In cell D3, we see that Pascal's P(C) is 75%: they think there is a 75% chance that AI will autonomously improve and deploy AI software by 2030.
- In cell E3, we have Pascal's P(U|C), that is, how likely they think U is conditional on C happening. Pascal's P(U|C) is 10.00%, meaning that if they knew "AI writes AI" will happen, they would think there is a 10.00% chance of AI causing an existential catastrophe by 2100.

From the information we input about Pascal's beliefs, the sheet calculates the following:

- Column F: P(U|¬C). This is Pascal's probability that the ultimate question (an AI-caused existential catastrophe) happens, conditional on "AI writes AI" not happening.
- Column G: Level VoI. This is the question's value of information for Pascal, computed in the simplest way: it shows how informative C will be to Pascal in expectation, weighting the update they would make if C happens and the update they would make if it doesn't by how likely each outcome is.
- Column H: Log VoI. This is the question's value of information for Pascal, computed using a log-based formula. This version may be more useful when working with very small or very large probabilities.
- Column L: % of Max VoI. This is how informative the question is to Pascal, as a percentage of the total information gain possible for Pascal. If they started out 99% sure that U would happen, their maximum possible information gain would be very small, because they are nearly certain of the outcome. If they started out with a forecast of 50%, they have much more uncertainty. This may be useful for comparing how informative different questions are between people who have very different forecasts on P(U).

You can find an R package for VOI/VOD here: https://github.com/forecastingresearch/voivod
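The computed columns can be sketched in code. The following is a minimal Python sketch, not the sheet's actual formulas: it assumes Level VoI is the probability-weighted expected absolute change in the forecast on U, Log VoI is the expected KL divergence from prior to posterior (the mutual information between U and C) in log base 10, and % of Max VoI divides that by the base-10 entropy of the prior P(U). These assumed formulas reproduce the values in Pascal's row of the table below.

```python
import math

def voi(p_u, p_c, p_u_given_c):
    """Given P(U), P(C), and P(U|C), return the sheet's computed columns."""
    # Column F: P(U|not C), from the law of total probability.
    p_u_given_not_c = (p_u - p_c * p_u_given_c) / (1 - p_c)

    # Column G: Level VoI -- expected absolute update on U.
    level_voi = (p_c * abs(p_u_given_c - p_u)
                 + (1 - p_c) * abs(p_u_given_not_c - p_u))

    # KL divergence between Bernoulli(p) and Bernoulli(q), in log base 10.
    # Zero-probability terms contribute nothing and are skipped.
    def kl(p, q):
        return sum(a * math.log10(a / b)
                   for a, b in ((p, q), (1 - p, 1 - q)) if a > 0)

    # Column H: Log VoI -- expected KL from prior to posterior,
    # i.e. the mutual information between U and C, in base 10.
    log_voi = (p_c * kl(p_u_given_c, p_u)
               + (1 - p_c) * kl(p_u_given_not_c, p_u))

    # Column L: % of Max VoI -- Log VoI as a share of the entropy of U,
    # the largest information gain possible given the prior P(U).
    max_gain = -(p_u * math.log10(p_u) + (1 - p_u) * math.log10(1 - p_u))
    pct_of_max = log_voi / max_gain

    return p_u_given_not_c, level_voi, log_voi, pct_of_max

# Pascal's row: P(U) = 9%, P(C) = 75%, P(U|C) = 10%.
f, g, h, l = voi(0.09, 0.75, 0.10)
print(f"P(U|¬C) = {f:.4%}")       # 6.0000%
print(f"Level VoI = {g:.4%}")     # 1.5000%
print(f"Log VoI = {h:.1E}")       # 8.6E-04
print(f"% of Max VoI = {l:.4%}")  # 0.6557%
```

Under these assumptions, Digby's "Update to 100%" row is the limiting case: with P(U|C) = 100%, learning C resolves all of Pascal's uncertainty about U, so % of Max VoI comes out to exactly 100%.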
Row | Person/group | Question (C) | P(U) | P(C) | P(U|C) | P(U|¬C) | Level VoI | Log VoI | % of Max VoI
---|---|---|---|---|---|---|---|---|---
3 | Pascal (Concerned) | AI writes AI | 9.00% | 75.00% | 10.00% | 6.0000% | 1.5000% | 8.6E-04 | 0.6557%
4 | Ash (Skeptical) | AI writes AI | 0.10% | 10.00% | 0.12% | 0.0978% | 0.0040% | 9.1E-07 | 0.0266%
5 | Zoe (Concerned) | Power-seeking | 21.00% | 10.00% | 18.00% | 21.3333% | 0.6000% | 1.4E-04 | 0.0606%
6 | Blake (Skeptical) | Power-seeking | 0.20% | 10.00% | 0.22% | 0.1978% | 0.0040% | 4.7E-07 | 0.0075%
7 | Alice | ARC Evals (Example) | 20.00% | 35.00% | 28.00% | 15.6923% | 5.6000% | 4.5E-03 | 2.0829%
8 | Alice | AGI (Example) | 60.00% | 99.00% | 60.00% | 60.0000% | 0.0000% | 0.0E+00 | 0.0000%
9 | Bob | AGI (Example) | 1.00% | 1.00% | 60.00% | 0.4040% | 1.1800% | 1.0E-02 | 41.4879%
10 | Riley (Concerned) | ARC Evals | 30.00% | 55.00% | 35.00% | 23.8889% | 5.5000% | 3.2E-03 | 1.2049%
11 | Flint (Skeptical) | ARC Evals | 1.10% | 1.00% | 1.30% | 1.0980% | 0.0040% | 7.6E-07 | 0.0029%
12 | Digby | Update to 100% | 5.00% | 5.00% | 100.00% | 0.0000% | 9.5000% | 8.6E-02 | 100.0000%
13 | Concerned | Equivalence Example | 25.0000% | 20.0000% | 6.1000% | 29.7250% | 7.5600% | 1.3E-02 | 5.2602%
14 | Skeptic | Equivalence Example | 1.0000% | 20.0000% | 3.3700% | 0.4075% | 0.9480% | 2.3E-03 | 9.5235%