The graphs below plot realized outcomes against model predictions. Select the type of prediction to evaluate (Win, Top 5, Top 20, or Cut), and select the point in the tournament at which the predictions were made (by round and front/back 9). See the notes at the bottom of the page for more details. These graphs display aggregated data using all predictions from the live model since early November 2017.

Notes: Recall that the live model predicts, every 5 minutes, the probability for each player of making the cut, finishing top 20, finishing top 5, and winning. For this analysis, model predictions are grouped into bins (e.g. 'Cut, 30-35%'). For each prediction we record the outcome (e.g. made cut or missed cut), and for each bin the relevant fraction is calculated (e.g. the fraction that made the cut). The goal is for these two quantities to be equal, as indicated by the 45-degree line. The key to this evaluation is a large sample size, so we have restricted the graphs here to bins with at least 100 predictions. Because the model makes predictions every 5 minutes, the outcomes of nearby predictions are correlated, which can make the sample sizes seem much larger than they really are. For example, in a single tournament, a player who collapses down the stretch to miss the cut affects every prediction made for that player up to that point. This does not bias the evaluation; it just means that the effective sample sizes are smaller than the raw prediction counts suggest.
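The binning procedure described in the notes can be sketched as follows. This is a minimal illustration, not the live model's actual pipeline; the function name, bin width, and 100-prediction threshold are stated here as assumptions matching the description above.

```python
# Sketch of the calibration check described in the notes: group predicted
# probabilities into fixed-width bins, then compare each bin's mean
# prediction with its realized outcome rate. Illustrative only.
from collections import defaultdict

def calibration_bins(predictions, outcomes, bin_width=0.05, min_count=100):
    """Return (bin_lo, bin_hi, mean_prediction, realized_rate) for each bin
    containing at least `min_count` predictions. A well-calibrated model
    yields pairs where mean_prediction ~= realized_rate, i.e. points that
    lie close to the 45-degree line."""
    n_bins = round(1 / bin_width)
    stats = defaultdict(lambda: [0, 0.0, 0])  # count, sum of preds, sum of outcomes
    for p, y in zip(predictions, outcomes):
        b = min(int(p / bin_width), n_bins - 1)  # clamp p == 1.0 into the top bin
        stats[b][0] += 1
        stats[b][1] += p
        stats[b][2] += y
    return [
        (b * bin_width, (b + 1) * bin_width, s_p / n, s_y / n)
        for b, (n, s_p, s_y) in sorted(stats.items())
        if n >= min_count  # only well-populated bins, as in the graphs above
    ]
```

For example, calling `calibration_bins(cut_probs, made_cut)` with 150 predictions of 0.32, of which 50 players made the cut, produces a single 'Cut, 30-35%' row with a mean prediction of 0.32 against a realized rate of about 0.33. Note that, per the correlation caveat above, the raw counts fed to `min_count` can overstate the effective sample size.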