As discussed at the 2017 TRB Annual Meeting.
Forecasters: improved processes; better reputation; more support from data
Previous work on before/after studies conducted by FTA showed that almost all models are flawed, inaccurate, and unreliable. However, this finding must be weighed against the lack of data available for validation.
Objectives
Develop a set of best practices for before/after studies and the ongoing programs that support them. There are likely already some good examples to build on [e.g., the World Bank?]. Most likely, the appropriate practices will differ based on the type of study or the needs placed on specific forecasts.
Consideration should also be given to the variety of forecasts that are often produced behind the scenes to develop a project. These often involve multiple sensitivity tests of potential scenarios and are not always published, since most documentation requirements call for a single point forecast or a specified set of inputs that may not be reasonable.
Types of performance measures:
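The notes above leave the list of measures open. As one illustration, a minimal sketch of two measures commonly reported when comparing forecast and observed outcomes in before/after studies: the signed percent error and the actual-to-forecast ratio. The function names, project names, and ridership figures below are hypothetical, not taken from any actual study.

```python
# Hypothetical example: two simple accuracy measures for a before/after study.
# Project names and ridership figures are made up for illustration only.

def percent_error(forecast: float, actual: float) -> float:
    """Signed percent error of the forecast relative to the observed value."""
    return 100.0 * (forecast - actual) / actual

def actual_to_forecast_ratio(forecast: float, actual: float) -> float:
    """Ratio of observed outcome to the point forecast (1.0 = perfect)."""
    return actual / forecast

# Hypothetical average weekday ridership: (forecast, actual)
projects = {
    "Line A": (45_000, 31_500),
    "Line B": (12_000, 13_800),
}

for name, (forecast, actual) in projects.items():
    print(f"{name}: percent error = {percent_error(forecast, actual):+.1f}%, "
          f"actual/forecast = {actual_to_forecast_ratio(forecast, actual):.2f}")
```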
Conflicts of interest should be appropriately considered; for this reason, academics or other third parties may be the best “evaluators.”
A pilot study could be used to test the best practices, for example by evaluating a specific “shovel-ready” project such as a managed facility.
There are likely many potential funders and interested parties willing to pay for such an effort, which should push this up the priority list.
However, these studies are expensive, and it could be difficult to motivate an agency that has already “closed its books” on a specific project to pay for something additional.