SportsMed Meeting Notes (Vanderbilt Biostatistics Wiki, Archive Web > Projects > SportsMed)

- 2012 April 13 Frank, Warren, JoAnn
- 2012 March 5 Frank, Warren, JoAnn
- 2011 December 5 Frank, Warren, JoAnn
- 2011 September 12 Frank, Warren, JoAnn
- 2011 August 26
- 2011 August 8 Frank, Warren, JoAnn
- 2011 July 29
- 2011 July 18 Frank, Warren, JoAnn, R
- 2011 May 2 JoAnn, Warren, Frank
- 2011 Mar 18 JoAnn and Warren
- 2011 Mar 04
- 2011 Feb 25
- 2011 Feb 11
- 2011 Feb 04
- 2011 Jan 28
- 2011 Jan 14
- 2011 Jan 01
- 2010 Dec 17
- 2010 Dec 10
- 2010 Dec 03
- 2010 Nov 19
- 2010 Nov 12

- It's a good idea to write up the propensity model for graft type in primary ACL reconstruction for publication in an orthopedic journal.
- JoAnn to write up methods and results from propensity model.
- Take surgeon out of the propensity model and look at the change in the other variables.
- For SF36 paper: Frank thinks we should include some of the internal model validation results in the manuscript, maybe in an online supplement. Include R^2 and Dxy. Increase bootstrap reps to 300.
- The mental and physical component score models have good validation results. Some of the other models have a bit of a drop. However, the other models also had better R^2 and Dxy to begin with.
- Frank noted that the signal-to-noise ratio in these models is lower than he expected.
- Frank is concerned that the validate and calibrate functions might have a bug for proportional odds models. Try re-fitting one of the models with (1) the baseline version of the outcome only and (2) a model with about 20 degrees of freedom selected from clinical knowledge rather than the p values from the full models. Then run the validate and calibrate functions on the new models and compare.
- For the mental and physical component scores, there are hundreds of unique values of the outcome. Try rounding the outcome to the nearest whole number. If there are at least 100 unique values, re-run. This will help things run faster.
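The rounding check suggested above can be sketched in base R. The data here are simulated stand-ins for a component score (the real scores come from the sf36 files); the point is just to round and count distinct values, since each distinct outcome value costs lrm an intercept.

```r
# Sketch of the rounding check suggested above (simulated stand-in data):
# round a continuous score to whole numbers and confirm enough
# distinct values remain before re-running the proportional odds fit.
set.seed(1)
y  <- runif(500, 0, 100) + rnorm(500, 0, 3)  # stand-in for a component score
yr <- round(y)                               # nearest whole number
n_distinct <- length(unique(yr))             # number of intercepts lrm would need
n_distinct
if (n_distinct >= 100) message("OK to re-run with the rounded outcome")
```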

- We looked at the QALY output (descriptives.pdf). Frank likes the spaghetti plots and the lowess smoothed curve with grayed data. He noted that if we do any modeling with this measurement, we need to include all patients. (We are currently only calculating this for those with values of sf6d at both t2 and t6).
- Frank recommends that we present the QALY in the paper using a function of his in Hmisc called curveRep. It finds representative curves. In this case, he recommends using 4 or 5 equally-spaced points.
- We discussed how to present the multitude of output we have on the 8 sf36 domains. Frank suggested we have a plot similar to plot(anova(mod)), but for each explanatory variable in the model, we have a point plotted for each of the 8 outcomes, plus for the 2 summary component scores (mcs and pcs), with each outcome represented by a different plotting character. The quantity plotted is the chi-square statistic minus the degrees of freedom. This plot would require a little programming. Frank wants me to send him the code after I do it.
- Could put all the nomograms in an online supplement.
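A minimal base-R sketch of the display Frank described: one point per outcome per predictor, plotting chi-square minus degrees of freedom. All predictor names and chi-square values below are hypothetical; in practice each row would come from anova(mod) for one of the 10 outcome models.

```r
# Sketch of the chi-square-minus-d.f. display (hypothetical numbers;
# real rows would come from anova(mod) for each of the 10 models).
res <- data.frame(
  predictor = rep(c("age", "sex", "graft type"), times = 2),
  outcome   = rep(c("mcs", "pcs"), each = 3),
  chisq     = c(12.1, 3.4, 8.7, 15.2, 2.1, 6.6),
  df        = c(2, 1, 1, 2, 1, 1)
)
res$adj <- res$chisq - res$df            # quantity to plot
pch_map <- c(mcs = 1, pcs = 17)          # one plotting character per outcome
plot(res$adj, seq_len(nrow(res)), pch = pch_map[res$outcome],
     xlab = "chi-square minus d.f.", ylab = "", yaxt = "n")
axis(2, at = seq_len(nrow(res)), labels = res$predictor, las = 1)
```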

- We're planning on submitting one paper on sf36 alone, and one paper on koos, ikdc, and marx together. Frank recommends we try to use the same type of model for all the outcomes in one paper.
- Think about using proportional odds for all these models. Check the proportional odds assumption for sf36, koos, ikdc, and marx by plotting partial residuals. The proportional odds model makes far fewer assumptions than the normal model; it assumes that the logits of the CDF of Y for separate covariate groups are parallel.
- Can also consider quantile regression. It assumes only continuity of the outcome and makes no other distributional assumptions. It allows estimation of the median (or any other quantile), though such an estimate is less precise than a mean estimate when the outcome really is normal.
- We will use the raw IKDC score rather than the normalized one.
- Compare the following: (1) ols with cluster adjustment using the cluster bootstrap (uses +/- 1.96, which assumes the beta hats are normally distributed); (2) Gls with a covariance structure specified to handle clustering; and (3) ols or Gls with bootcov, using the bootstrap-estimated confidence intervals, which use a nonparametric percentile method and do not assume normality of the beta hats. A histogram of the bootstrap estimates can help assess normality of the bootstrap beta estimates.
- Is adjusting for surgeon masking the effect of graft type? Remove surgeon from the propensity model and re-run everything. Add the results to the plot of the confidence intervals.
- In the koos or past marx model, make two nomograms, one with ols and one with lrm, and see if they are similar.
- Want to look at utility for general health outcomes. Calculate and plot the means of SF6D over time. Compute QALY as the area under the curve below the SF6D over time for each patient. (For now, only calculate these for those patients with both 2 year and 6 year follow up.)
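The QALY computation described above (area under the SF6D curve over time) can be sketched with the trapezoid rule in base R. The visit times and utility values below are hypothetical, chosen only to illustrate the calculation.

```r
# Minimal sketch of the QALY computation: area under the SF6D utility
# curve over time, via the trapezoid rule. Times and utilities are
# hypothetical (baseline, 2-year, and 6-year visits).
trap_auc <- function(t, u) sum(diff(t) * (head(u, -1) + tail(u, -1)) / 2)

t <- c(0, 2, 6)            # years since surgery
u <- c(0.60, 0.75, 0.80)   # SF6D utility at each visit
qaly <- trap_auc(t, u)
qaly                       # 4.45 quality-adjusted life years over 6 years
```

As noted above, this requires a value at every time point; patients missing sf6d at t2 or t6 would need imputation before the area can be computed for everyone.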

- Remove propensity score from both sets of models and look at the effect on the estimate and CI for graft type. If it is still not significant, we will exclude propensity score.
- Look at the shape of the propensity of allograft for all models.
- Look at the partial effects of the propensity model to see if there is one main driver of the propensity score.

- Change caucasian to white in the create, mixed, and koos models.
- Try adding Key() with no arguments after the plot of the interactions to get labels.

- Frank wants to see more residual plots of the koos models. Says the one for the symptoms score is okay. Do the plots on the handout for the Gls case study. He will use these to decide whether to stick with the Gls model for the other 4 koos outcomes or try a transformation or a proportional odds model (or something else?).

- Frank says our correlation estimate from the koos symptom score model is very low, while the correlation from the pain score is higher. One way to verify this is to make a scatterplot of a particular outcome at t2 against the corresponding outcome at t6. Frank says we should discuss the correlation in the paper.

- Make one variable with all meniscus treatment options (separately for lateral and medial) so that the comparisons being made make sense. Replace the mm.excat, lm.excat, noTxTear and repair variables with this variable in the koos and sf36 models.
- After that is done, if we still have effect directions that don't make sense, we could look at how well all the other predictors predict the amount of meniscus excised.

- See if the effect of the baseline covariates is different for different follow up times by adding an interaction term for (unsplined) time with all the other baseline covariates. Look at the test for all interactions with time.
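The time-interaction check above can be sketched with base R lm() and a nested-model F-test (the project's actual fits use Gls/rms, where anova() on the interaction terms plays the same role). The data below are simulated with no true interactions, and the covariate names are made up.

```r
# Sketch of the time-interaction check: compare a main-effects model to
# one adding unsplined time x baseline-covariate interactions, and test
# all interactions at once. Simulated data; real models use Gls/rms.
set.seed(2)
n   <- 300
dat <- data.frame(time = rep(c(2, 6), each = n / 2),
                  age  = rnorm(n, 30, 8),
                  sex  = factor(sample(c("F", "M"), n, replace = TRUE)))
dat$y <- 50 + 0.3 * dat$age + 2 * (dat$sex == "M") +
         1.5 * dat$time + rnorm(n, 0, 5)

f_main <- lm(y ~ time + age + sex, data = dat)
f_int  <- lm(y ~ time * (age + sex), data = dat)  # adds time:age, time:sex
anova(f_main, f_int)   # single F-test of all interactions with time
```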

- Present: Warren, Suzet, Emily, Laura, Thomas, JoAnn
- Can we make a nomogram with these models?
- JoAnn will draft paragraph for stat methods
- Answer Warren's questions about model output.
- Frank suggested removing the propensity score from the model to see whether graft type becomes significant; if it is still not significant, we can leave the propensity score out of the model.
- Plot effects of the lm.excat*lfc.chondcat interaction. (Check the different output of Frank's functions first.)

- We will ask Frank for help with model interpretation
- Fit models for IKDC and KOOS.

- Frank wants me to do some verifying of the ordinal program. Could check by subsetting to only 2 year outcomes, and check that lrm and ordinal give identical output. (This will not be billed to Sports Medicine.)
- For the sf36 domain scores, we will probably go with lrm and then use robcov or bootcov (since the ordinal package takes a long time to run and doesn't converge). JoAnn will try both robcov and bootcov on several models and compare the output. One way to compare them is to take the ratio of diag(vcov(fit)) for each model fit.
- We will use aregImpute to impute any remaining missings.
- We reviewed the current missings for these models after I had already subsetted out rows with no follow up. They are much lower, but several remain for age. Zhouwen and Suzet are checking on them.
- Frank noted clustering in the physical function outcome. We think this is due to a ceiling effect in the Likert-type items in this section.
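The vcov-ratio comparison mentioned above can be sketched in base R. Here the two variance estimates are a model-based lm vcov and a hand-rolled HC0 sandwich vcov for the same fit; in the project the two fits would instead be robcov(f) and bootcov(f) from rms. Data are simulated.

```r
# Sketch of comparing two variance estimates for the same model via the
# ratio of diagonal vcov elements. Stand-in: model-based lm vcov vs a
# hand-rolled HC0 sandwich vcov; the project would compare robcov vs
# bootcov fits the same way. Simulated heteroscedastic data.
set.seed(3)
n <- 200
x <- rnorm(n)
y <- 1 + 2 * x + rnorm(n, sd = abs(x) + 0.5)  # non-constant error variance
f <- lm(y ~ x)

X     <- model.matrix(f)
u     <- residuals(f)
bread <- solve(crossprod(X))
meat  <- crossprod(X * u)                  # sum_i u_i^2 x_i x_i'
v_sand <- bread %*% meat %*% bread         # HC0 sandwich vcov
ratio  <- diag(v_sand) / diag(vcov(f))     # >1 where model-based SEs are too small
ratio
```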

- Present: Warren, Laura, Suzet, Emily, Zhouwen, JoAnn
- Discussed "missing" records that were not used in the sf36 model, and whether these were missing because the person didn't have follow up for that time point.
- JoAnn will verify that everyone in the analysis data has at least one followup.
- In the descriptives document, JoAnn will show descriptives of only those who have t2 or t6 follow up for the variables that are shown by time point, to help determine how many values are missing.
- JoAnn will update the code that creates the previous-surgery graft type variables so that if a patient had aclrep ONLY (not intacl or extacl) on a particular knee, that knee gets "not applicable" recorded for previous-surgery graft type, ignoring any allo or auto marked for that knee.
- JoAnn will add competition level and sport to the outcome models.
- Zhouwen and Suzet are going to check on missing values for age.
- To investigate the surgeon initials "ERR," JoAnn will check the year, time point, and regn for those records.
- JoAnn will add multiple imputation, probably with aregImpute.
- JoAnn will add descriptives of all the individual items and summary scores of the koos, IKDC, sf36, sf6d, and marx.
- JoAnn will run a redundancy analysis on these measures.

- Talked about the normalization of the SF36. Frank is not a fan of normalizations in general.
- JoAnn will plot the raw scores against the normalized ones for a specific domain to see if it is a linear transformation.
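The linearity check above has a quick base-R sketch: if the normalized score is a linear transformation of the raw score, regressing one on the other fits perfectly. The raw values and the normalization coefficients below are made up for illustration.

```r
# Sketch of the linearity check: if the normalized score is a linear
# transformation of the raw score, lm() recovers it exactly. The raw
# values and the 20 + 0.8*raw "normalization" are hypothetical.
raw  <- c(10, 25, 40, 55, 70, 85, 100)
norm <- 20 + 0.8 * raw             # pretend published normalization
f <- lm(norm ~ raw)
summary(f)$r.squared               # exactly 1 for a linear transformation
max(abs(residuals(f)))             # ~0 up to rounding
```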

- Decided to model the eight domain scores instead of the component scores. Can use a proportional odds model and adjust afterward using robcov OR use ordinal regression with a random effect via the ordinal package in R. The mixed effects can handle missing not at random better. (?)
- JoAnn will run **both** types of models on the physical function domain.

- JoAnn will run
- Need to compare those with T2 only vs. those with T2 and T6 based on the outcomes.
- Discussion about time-dependent covariates. They make inference and interpretation difficult. Frank is worried about including marx activity level as a time-dependent covariate. We will limit marx to baseline only.
- Will try to set up a clinic that Jonathan can attend to discuss this.

- Looked at results from fastbw function we ran to decide which subscores to model. We could just decide to run the summary scores since we saw that R squared = 1.
- Frank approved of our plan for controlling for excision and knee variables. (lm.ex*(lfc + ltp) + mm.ex*(mfc + mtp)) He also is good with our lumping plan.

- Current priority is finalizing the sf36 models.
- JoAnn will add more demographics to the descriptives.
- Decided we will treat mental and physical composite scores and utility score as continuous, and give only graphs for the 8 domain scores. Analysis plan updated.
- Will bring up with Frank the sparse combinations of surgeons and allografts.
- A large number of allograft propensity scores are missing due to missing values of aclrep.rt. This uncovers a bigger issue: whether we think allografts are predicted by previous surgery on the other knee or on the same knee, and it also implies a big recoding job. JoAnn will look at ptg.auto and ptg.allo and figure out whether there are enough observations to consider whether a previous autograft came from the patella or hamstring.
- Once given the green light, JoAnn will work on combining the 2002, 2003, and 2007+ data for an allograft propensity model paper.
- Data issues

- Suzete will get it for him, but we will still need to add things because the barcode was T6

- Thomas will re-install Teleform on this, which will push it into the file-share server; then we will have to migrate the stuff over and can do a test run.

- PI: Morgan Jones
- Computer ordered by Zhouwen
- We will keep this “on our radar”

- All of the new variables merged in perfectly; all of the variables that had older counterparts did not.
- Suzete will purge some old variables in the t2 source file

- Kurt has a paper he wants written that will mostly include onsite data (for late December/early January); Emily and JoAnn can work on the analysis in the weeks to come.

- Use “follow up” for T2 and T6
- Suzete will go through source files and update
- Same variable, different variable names; can we merge them?

- “lclrep” does not exist in Ipak; “oplcl” does not exist in the questionnaire. We need to merge these two.
- Zhouwen did this in ’02; we need to do it in ’03, but there is no “lclrep” in ’03.
- Zhouwen and Suzete will look into this

- Free & Open Source Wiki
- biostat.mc.vanderbilt.edu
- To register, go to the main page and use your first and last name, with no space in between and the first letter of each capitalized.

- It may be more efficient to edit the Wiki itself instead of uploading file attachments
- We need to think about our formatting; what links should we make?
- It would be great to have something that tracks what we are doing; if we want to make a change or ask a question, it could notify everyone; we could respond, and it would show who has worked on it, eliminating extra work.
- We need to set up notifications (do this through your own wiki page)
- It would be nice if we could have a Nomogram link on the page


Topic revision: r24 - 27 Nov 2017, DalePlummer

Copyright © 2013-2022 by the contributing authors. All material on this collaboration platform is the property of the contributing authors.

Ideas, requests, problems regarding Vanderbilt Biostatistics Wiki? Send feedback
