I want to draw attention to some scientific work on political punditry. The research tested several hundred political pundits, having them make long-term predictions on matters both closely and not so closely related to their fields of expertise. For each question they rated three possible outcomes on a rough probability scale.
The psychologist Philip Tetlock concluded that they performed worse than "dart-throwing monkeys" and would have been better off simply assigning equal probability to each option for every prediction. The research found that performance was barely better than random, and that the most famous and most specialised pundits performed the poorest.
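To make the equal-probability point concrete, here is a minimal sketch in Python (my own illustration, not taken from the study) using the multi-category Brier score, the accuracy measure this literature relies on. A uniform forecast over three options scores about 0.67 no matter what happens, while confidently backing a single option at random scores 0 when right but 2 when wrong, for an expected score of about 1.33, twice as bad.

```python
def brier(probs, outcome):
    """Multi-category Brier score: sum of squared differences between
    the forecast probabilities and the realised outcome vector
    (1 for the option that happened, 0 for the rest). Lower is better."""
    return sum((p - (1.0 if i == outcome else 0.0)) ** 2
               for i, p in enumerate(probs))

# A uniform forecast over three options scores the same whatever happens.
print(brier([1/3, 1/3, 1/3], outcome=0))   # ~0.667

# Putting all the weight on one option, as a confident pundit might:
print(brier([1.0, 0.0, 0.0], outcome=0))   # 0.0 when the guess is right
print(brier([1.0, 0.0, 0.0], outcome=2))   # 2.0 when the guess is wrong
# Right roughly one time in three, so the expected score is ~1.33,
# twice as bad as just spreading the probability evenly.
```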
These findings have led to the development of the Good Judgment Project and have significant implications for both economics and geopolitics.
https://en.wikipedia.org/wiki/Philip_E._Tetlock
They have a website which allows individuals and groups to take part in forecasting on a huge variety of topics across the world - economic, defense and political in nature.
https://www.gjopen.com/questions?filter=featured
Tetlock argues that humans tend to latch onto singular big ideas and build coherent stories around them, which gives them confidence in forecasting the future (just look at how the experts performed). We also construct coherent stories about past events with hindsight, which creates the illusion that those events were really predictable all along, and this feeds the delusion that future events can be forecast too. The reality is that any prediction depends on events that are usually highly unpredictable (that sounds absurdly simple), yet everyone still slips into the belief that they can outperform a monkey with darts once discussion of future events gets going.
These findings were reported widely in the media and came to the attention of Intelligence Advanced Research Projects Activity (IARPA) inside the United States intelligence community—a fact that was partly responsible for the 2011 launch of a four-year geopolitical forecasting tournament that engaged tens of thousands of forecasters and drew over one million forecasts across roughly 500 questions of relevance to U.S. national security, broadly defined.
Since 2011, Tetlock and his wife/research partner Barbara Mellers have been co-leaders of the Good Judgment Project (GJP), a research collaborative that emerged as the winner of the IARPA tournament.[3] The original aim of the tournament was to improve geo-political and geo-economic forecasting. Illustrative questions include “What is the chance that a member will withdraw from the European Union by a target date?” or “What is the likelihood of naval clashes claiming over 10 lives in the East China Sea?” or “How likely is the head of state of Venezuela to resign by a target date?” The tournament challenged GJP and its competitors at other academic institutions to come up with innovative methods of recruiting gifted forecasters, methods of training forecasters in basic principles of probabilistic reasoning, methods of forming teams that are more than the sum of their individual parts and methods of developing aggregation algorithms that most effectively distill the wisdom of the crowd.[3][4][5][6][7][8]
Among the more surprising findings from the tournament were:
the degree to which simple training exercises improved the accuracy of probabilistic judgments as measured by Brier scores;[3][4]
the degree to which the best forecasters could learn to distinguish many degrees of uncertainty along the zero to 1.0 probability scale (many more distinctions than the traditional 7-point verbal scale used by the National Intelligence Council);[4][9]
the consistency of the performance of the elite forecasters (superforecasters) across time and categories of questions;[4][5][6]
the power of a log-odds extremizing aggregation algorithm to outperform competitors (sketched briefly after this list);[7][10] and
the apparent ability of GJP to generate probability estimates that were "reportedly 30% better than intelligence officers with access to actual classified information."
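To illustrate what a log-odds extremizing aggregator does, here is a minimal Python sketch; it is my own illustration rather than GJP's actual algorithm. Individual probabilities are converted to log-odds, averaged, multiplied by an extremizing factor greater than 1 to push the pooled estimate away from 0.5 (compensating for forecasters each holding only part of the available evidence), and converted back to a probability. The factor of 2.5 is an assumed value for illustration only.

```python
import math

def extremized_aggregate(probs, a=2.5):
    """Pool individual probability forecasts for one binary question by
    averaging them in log-odds space and then extremizing the result.

    probs: individual forecasts, each strictly between 0 and 1.
    a:     extremizing factor; a > 1 pushes the pooled forecast away
           from 0.5, a = 1 gives a plain log-odds average.
    """
    # Convert each probability to log-odds and take the mean.
    mean_log_odds = sum(math.log(p / (1 - p)) for p in probs) / len(probs)
    # Extremize by scaling the log-odds, then map back to a probability.
    z = a * mean_log_odds
    return 1 / (1 + math.exp(-z))

# Three forecasters who all lean the same way but hold partly
# independent evidence: the pooled, extremized forecast ends up more
# confident than any individual one.
print(extremized_aggregate([0.65, 0.70, 0.75]))  # ~0.89
```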
Tetlock and Mellers[11] see forecasting tournaments as a possible mechanism for helping intelligence agencies escape from blame-game (or accountability) ping-pong, in which agencies find themselves whipsawed between clashing critiques that they were either too slow to issue warnings (false negatives such as 9/11) or too fast to issue warnings (false positives). They argue that tournaments are ways of signaling that an organization is committed to playing a pure accuracy game, generating probability estimates that are as accurate as possible (and not tilting estimates to avoid the most recent "mistake").