Modern Approaches to Predictive Modeling and Covariable Adjustment in Randomized Clinical Trials
Frank E Harrell Jr
University of Virginia
Audience: Statisticians and persons from other quantitative disciplines who are interested in multivariable regression analysis of univariate responses; in developing, validating, and graphically describing multivariable predictive models; and in covariable adjustment in randomized clinical trials.
Prerequisites: A good general knowledge of statistical estimation and inference methods and a good command of ordinary linear regression. Participants are encouraged to read references [1, 2, 4] in advance. Those interested in covariable adjustment in randomized clinical trials may also want to read [3].
Course Description: The first part of the course presents the following elements of multivariable predictive modeling for a single response variable: using regression splines to relax linearity assumptions, perils of variable selection and overfitting, where to spend degrees of freedom, shrinkage, imputation of missing data, data reduction, and interaction surfaces. A default overall modeling strategy is then described, followed by methods for understanding models graphically (e.g., using nomograms) and for using resampling to estimate a model's likely performance on new data. Next, the freely available S-Plus Design library, which facilitates most steps of the modeling process, will be overviewed. A case study exploring voting tendencies across U.S. counties in the 1992 presidential election will be presented. The course will also survey the advantages of modeling in randomized trials and will provide guidance in developing a prospective statistical plan for use in a Phase III clinical trial. The methods covered apply to almost any regression model, including ordinary least squares, logistic regression, and survival models.
Course Objectives: Upon completing the course, participants will:
  1. Be familiar with modern methods for fitting multivariable regression models:
    1. accurately
    2. within the limits the sample size allows, without overfitting
    3. uncovering complex non-linear or non-additive relationships
    4. testing for and quantifying the association between one or more predictors and the response, with possible adjustment for other factors
  2. Be able to validate models for predictive accuracy and to detect overfitting
  3. Be able to interpret fitted models using both parameter estimates and graphics
  4. Be able to critique the literature to detect models that are likely to be unreliable
  5. Understand benefits of covariable adjustment in randomized studies
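As a concrete illustration of the last objective, the following sketch (a hypothetical Python/NumPy translation, not the S-Plus code used in the course) simulates a randomized trial and shows that adjusting for a strong baseline covariate shrinks the standard error of the treatment-effect estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                 # strong baseline covariate
treat = rng.integers(0, 2, size=n)     # 1:1 randomized assignment
y = 1.0 * treat + 2.0 * x + rng.normal(size=n)

def ols(X, y):
    """Least-squares fit; returns coefficients and their standard errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, se

ones = np.ones(n)
b_unadj, se_unadj = ols(np.column_stack([ones, treat]), y)
b_adj, se_adj = ols(np.column_stack([ones, treat, x]), y)
print(f"unadjusted SE: {se_unadj[1]:.3f}  adjusted SE: {se_adj[1]:.3f}")
```

Both estimates of the treatment effect are unbiased here because of randomization; the adjusted analysis simply removes outcome variation explained by the covariate, which is the efficiency gain discussed in the course.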
Course Outline:
  1. Planning for Modeling
  2. Covariable Adjustment in Randomized Clinical Trials
    1. Gaining efficiency
    2. Reducing bias even with perfect balance
  3. Notation for Regression Models
  4. Interpreting Model Parameters
    1. Nominal Predictors
    2. Interactions
  5. Relaxing Linearity Assumption for Continuous Predictors
    1. Simple Nonlinear Terms
    2. Splines for Estimating Shape of Regression Function and Determining Predictor Transformations
    3. Cubic Spline Functions
    4. Restricted Cubic Splines
    5. Nonparametric Regression
    6. Advantages of Splines over Other Methods
  6. Tests of Association
  7. Assessment of Model Fit
    1. Regression Assumptions
    2. Modeling and Testing Interactions
  8. Missing Data
    1. Types of Missingness
    2. Understanding Patterns of Missing Values
    3. Problems with Simple Alternatives to Imputation
    4. Strategies for Developing Imputation Algorithms
    5. Single Conditional Mean Imputation
    6. Multiple Imputation
    7. S-Plus Software for Fitting Models and Adjusting Variances for Multiple Imputation
  9. Multivariable Modeling Strategy
    1. Pre-Specification of Predictor Complexity
    2. Variable Selection
    3. Overfitting and Limits on Number of Predictors
    4. Shrinkage
    5. Data Reduction
  10. Resampling, Validating, Describing, and Simplifying the Model
    1. The Bootstrap
    2. Model Validation
    3. Graphically Describing the Fitted Model
    4. Simplifying the Model by Approximating It
  11. S-Plus Design library
  12. Case Study using Least Squares Multiple Regression: Voting Patterns in U.S. Counties
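To preview item 10 of the outline, here is a minimal sketch (hypothetical Python, not the S-Plus Design validation functions covered in the course) of Efron-style bootstrap validation: the apparent R-squared of an overfitted least-squares model is corrected by the average optimism estimated from bootstrap refits:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 60, 10
X = rng.normal(size=(n, p))
y = X[:, 0] + rng.normal(size=n)       # only the first predictor matters

def fit(X, y):
    """OLS with intercept; returns the coefficient vector."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return beta

def r2(X, y, beta):
    """R-squared of a previously fitted model evaluated on (X, y)."""
    Xd = np.column_stack([np.ones(len(y)), X])
    resid = y - Xd @ beta
    return 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)

apparent = r2(X, y, fit(X, y))
optimism, B = 0.0, 200
for _ in range(B):
    idx = rng.integers(0, n, size=n)            # bootstrap resample
    beta_b = fit(X[idx], y[idx])
    # optimism = bootstrap-sample performance minus original-sample performance
    optimism += r2(X[idx], y[idx], beta_b) - r2(X, y, beta_b)
corrected = apparent - optimism / B
print(f"apparent R2: {apparent:.2f}  optimism-corrected R2: {corrected:.2f}")
```

With many noise predictors and a modest sample, the corrected value falls well below the apparent one, which is the overfitting signal the validation step is designed to expose.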
Instructor: Dr. Harrell is Professor of Biostatistics and Statistics and Chief of the Division of Biostatistics and Epidemiology, Department of Health Evaluation Sciences, University of Virginia School of Medicine, Charlottesville. He received his Ph.D. in biostatistics from the University of North Carolina, Chapel Hill in 1979, where he studied under P.K. Sen. Dr. Harrell has been involved in statistical computing since 1969 and is the author of many S-Plus functions and SAS procedures. Since 1973 he has been involved in medical applications of statistics, especially in the area of survival analysis and clinical prediction modeling. He is an editorial consultant for the Journal of Clinical Epidemiology, is on the editorial board of Statistics in Medicine, is co-managing editor of the new journal Health Services and Outcomes Research Methodology, and is a consultant to FDA.
Handouts: Participants will receive copies of the 185 slides that will be presented.
Regression models are frequently used to develop diagnostic, prognostic, and health resource utilization models in clinical, health services, outcomes, pharmacoeconomic, and epidemiologic research, and in a multitude of non-health-related areas. Regression models are also used to adjust for patient heterogeneity in randomized clinical trials, to obtain tests that are more powerful and valid than unadjusted treatment comparisons. Models must be flexible enough to fit nonlinear and non-additive relationships, but unless the sample size is enormous, the approach to modeling must avoid common problems with data mining or data dredging that result in overfitting and a failure of the predictive model to validate on new subjects. All standard regression models have assumptions that must be verified for the model to have power to test hypotheses and for it to be able to predict accurately. Of the principal assumptions (linearity, additivity, distributional), this short course will emphasize methods for assessing and satisfying the first two. Practical but powerful tools are presented for validating model assumptions and presenting model results. This course provides methods for estimating the shape of the relationship between predictors and response using the widely applicable method of augmenting the design matrix using restricted cubic splines.
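The design-matrix augmentation mentioned above can be sketched as follows (a hypothetical Python translation of the restricted cubic spline truncated-power basis described in [1]; in the course itself this is handled by the S-Plus Design library):

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline basis in truncated-power form.

    Returns an (n, k-1) matrix for k knots: x itself plus k-2 nonlinear
    terms, constructed so the fitted function is linear beyond the
    outermost knots."""
    x = np.asarray(x, dtype=float)
    t = np.sort(np.asarray(knots, dtype=float))
    k = len(t)
    norm = (t[-1] - t[0]) ** 2          # scales terms to x's units
    cols = [x]
    for j in range(k - 2):
        term = (np.maximum(x - t[j], 0) ** 3
                - np.maximum(x - t[-2], 0) ** 3 * (t[-1] - t[j]) / (t[-1] - t[-2])
                + np.maximum(x - t[-1], 0) ** 3 * (t[-2] - t[j]) / (t[-1] - t[-2]))
        cols.append(term / norm)
    return np.column_stack(cols)
```

Each column enters the model as an ordinary regressor, so the same augmentation works for least squares, logistic, and survival models; the tail-linearity restriction is what keeps the fit well behaved beyond the outermost knots.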


References:

[1] F. E. Harrell, K. L. Lee, and D. B. Mark. Multivariable prognostic models: Issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors. Statistics in Medicine, 15:361--387, 1996.

[2] F. E. Harrell, P. A. Margolis, S. Gove, K. E. Mason, E. K. Mulholland, D. Lehmann, L. Muhe, S. Gatchalian, and H. F. Eichenwald. Development of a clinical prediction model for an ordinal outcome: The World Health Organization ARI Multicentre Study of clinical signs and etiologic agents of pneumonia, sepsis, and meningitis in young infants. Statistics in Medicine, 17:909--944, 1998.

[3] W. W. Hauck, S. Anderson, and S. M. Marcus. Should we adjust for covariates in nonlinear regression analyses of randomized trials? Controlled Clinical Trials, 19:249--256, 1998.

[4] A. Spanos, F. E. Harrell, and D. T. Durack. Differential diagnosis of acute meningitis: An analysis of the predictive value of initial observations. Journal of the American Medical Association, 262:2700--2707, 1989.