The Law of Likelihood holds that the strength of statistical evidence in data is measured by likelihood ratios, not by p-values or posterior probabilities. I have argued elsewhere that the Law of Likelihood represents a theory of evidence and that such a theory is conspicuously absent from modern statistical methodology. In this talk, however, I set those philosophical arguments aside and present a pragmatic case for the Law of Likelihood: likelihood ratios are more flexible, more efficient, and more accurate than traditional hypothesis testing methods or Bayesian alternatives.
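In its usual formulation (the notation below is illustrative, not taken from the talk): if hypothesis $H_1$ implies that the probability of the observed data $x$ is $P(x \mid H_1)$, and hypothesis $H_2$ implies $P(x \mid H_2)$, then $x$ is evidence supporting $H_1$ over $H_2$ precisely when
\[
\frac{P(x \mid H_1)}{P(x \mid H_2)} > 1,
\]
with the likelihood ratio itself measuring the strength of that evidence.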
In particular, I will focus on the frequency of observing misleading evidence, which is naturally bounded and controllable. For example, using a likelihood ratio as the measure of evidence minimizes the overall probability of making an error (either type I or type II), even in settings with multiple endpoints where the type I error rate is adjusted to avoid inflation. Likelihood ratios are also seldom misleading, even when the study design is (statistically) rigged to produce evidence favoring a pet hypothesis over the true hypothesis. I illustrate these theoretical advances with simple examples and with a real-world study design whose primary endpoint is the time to an event.
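The sense in which misleading evidence is "naturally bounded" can be made concrete with the universal bound: under the true hypothesis, the probability that the likelihood ratio favors a false alternative by a factor of $k$ or more is at most $1/k$. The sketch below checks this by simulation; the normal model, the means, and the threshold $k = 8$ are illustrative choices of mine, not examples from the talk.

```python
# Minimal simulation sketch (illustrative assumptions, not the talk's example):
# data are generated under the true hypothesis H1: mu = 0, and we ask how often
# the likelihood ratio favors a false alternative H2: mu = 1 by a factor of k
# or more. The universal bound says this frequency can never exceed 1/k.
import numpy as np

rng = np.random.default_rng(0)
n, k, reps = 10, 8, 100_000          # sample size, evidence threshold, simulations
mu1, mu2, sigma = 0.0, 1.0, 1.0      # H1 (true) vs H2 (false alternative)

x = rng.normal(mu1, sigma, size=(reps, n))   # data generated under H1

# log likelihood ratio of H2 vs H1 for normal data with known sigma
log_lr = ((x - mu1) ** 2 - (x - mu2) ** 2).sum(axis=1) / (2 * sigma ** 2)

# frequency of misleading evidence: LR favoring H2 by a factor of k or more
misleading = np.mean(log_lr >= np.log(k))

print(f"P(LR >= {k}) ~= {misleading:.4f}   (universal bound: {1 / k:.4f})")
```

In this setup the simulated frequency falls well below the 1/k bound, which is typical: the bound is universal, and the actual probability of misleading evidence is usually much smaller.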