Selecting a few promising or important inferences from a larger pool for emphasis and discussion, where both the selection and the inference are based on the same data, gives rise to selection bias.
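To see the effect, consider a minimal simulation (all names and parameters below are illustrative, not part of our methods): naive 95% confidence intervals, reported only for the estimates that cross a selection threshold, cover the true effects far less often than the nominal level suggests.

```python
import numpy as np

rng = np.random.default_rng(0)
m, reps, z = 1000, 200, 1.96        # estimates per repetition, repetitions, 95% quantile
mu = 1.0                            # common true effect (illustrative choice)

covered = selected = 0
for _ in range(reps):
    x = rng.normal(mu, 1.0, size=m)     # unbiased estimates, known sd = 1
    sel = x > 2.0                       # keep only the "promising" estimates
    lo, hi = x[sel] - z, x[sel] + z     # naive marginal 95% CIs for the selected
    covered += np.sum((lo <= mu) & (mu <= hi))
    selected += sel.sum()

# nominal level is 0.95; over the selected it drops to roughly 0.84
print(f"coverage over the selected: {covered / selected:.3f}")
```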
Our research addresses this concern by developing statistical methods for selective inference that account for the bias, by assuring that the original property of the inference still holds 'on the average over the selected.'
For confidence intervals, the concern about 'average coverage over the selected' is captured by the "False Coverage-statement Rate" (FCR). For tests of significance, the same concern amounts to controlling the "False Discovery Rate" (FDR).
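As a hedged sketch of how these two rates operate in practice (the data and parameters are simulated and illustrative): the Benjamini-Hochberg procedure controls the FDR among the selected tests, and the matching FCR-adjusted intervals of Benjamini & Yekutieli (2005) restore average coverage over the selected by constructing each selected interval at level 1 - Rq/m, where R of the m candidate intervals were selected.

```python
import numpy as np
from scipy import stats

def bh_select(pvals, q=0.05):
    """Benjamini-Hochberg step-up: boolean mask of rejected hypotheses."""
    m = len(pvals)
    order = np.argsort(pvals)
    passed = np.nonzero(pvals[order] <= q * np.arange(1, m + 1) / m)[0]
    sel = np.zeros(m, dtype=bool)
    if passed.size:
        sel[order[:passed[-1] + 1]] = True
    return sel

rng = np.random.default_rng(1)
m, q = 100, 0.05
mu = np.r_[np.full(10, 3.0), np.zeros(90)]    # 10 real effects among 100 (illustrative)
x = rng.normal(mu, 1.0)
p = 2 * stats.norm.sf(np.abs(x))              # two-sided p-values, known sd = 1

sel = bh_select(p, q)                         # FDR <= q over the discoveries
R = sel.sum()
z = stats.norm.ppf(1 - R * q / (2 * m))       # FCR intervals: level 1 - R*q/m
print(f"R = {R} selected; FCR-adjusted intervals use z = {z:.2f} instead of 1.96")
print(np.c_[x[sel] - z, x[sel] + z].round(2)) # intervals for the selected
```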
Simultaneous inference, which requires the probability of making even a single error to be below the desired level, also addresses the selection bias, but unless specifically needed it may result in overly conservative methods.
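The trade-off can be made concrete by running Bonferroni (which controls the familywise error rate) and BH on the same simulated p-values; the following sketch uses statsmodels and illustrative parameters, and Bonferroni typically makes noticeably fewer discoveries.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(2)
m, q = 1000, 0.05
mu = np.r_[np.full(50, 2.5), np.zeros(950)]   # 50 modest effects among 1000 (illustrative)
p = 2 * stats.norm.sf(np.abs(rng.normal(mu, 1.0)))

bonf = multipletests(p, alpha=q, method='bonferroni')[0]   # P(any false rejection) <= q
bh = multipletests(p, alpha=q, method='fdr_bh')[0]         # FDR <= q
print(f"Bonferroni discoveries: {bonf.sum()}, BH discoveries: {bh.sum()}")
```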
We develop methods that rely on the specifics of the selection process: selecting the significant results, the estimators that are larger than a threshold, or simply the largest one. At the other extreme, we also develop methods appropriate for any selection rule. New tools are needed to address complexly structured problems that involve hierarchical search, or search to support replicability statements once multiple studies are at hand (Heller & Yekutieli; Bogomolov & Heller; Heller, Bogomolov & Benjamini).
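For the simplest of these selection rules, reporting an estimate only when it exceeds a fixed threshold, valid conditional inference can be obtained by inverting the truncated-normal distribution of the estimate given selection. The sketch below assumes a normal estimate with known unit variance and is illustrative rather than a statement of our published procedures.

```python
import numpy as np
from scipy import stats, optimize

def conditional_ci(x, c, alpha=0.05, sd=1.0):
    """CI for mu, valid conditionally on selection X > c, where X ~ N(mu, sd^2).

    Inverts the conditional survival probability P_mu(X >= x | X > c),
    which is monotone increasing in mu, so each endpoint is a root.
    """
    def cond_sf(mu):
        return stats.norm.sf((x - mu) / sd) / stats.norm.sf((c - mu) / sd)

    lo = optimize.brentq(lambda mu: cond_sf(mu) - alpha / 2,
                         x - 20 * sd, x + 20 * sd)
    hi = optimize.brentq(lambda mu: cond_sf(mu) - (1 - alpha / 2),
                         x - 20 * sd, x + 20 * sd)
    return lo, hi

# an estimate of 2.5 reported because it crossed the threshold c = 2:
lo, hi = conditional_ci(2.5, c=2.0)
print(f"conditional 95% CI: [{lo:.2f}, {hi:.2f}]")  # wider and lower than 2.5 +/- 1.96
```

Because the estimate barely clears the threshold, the conditional interval stretches far below the naive one, reflecting how little evidence survives once the selection event is taken into account.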