ThursdayClinicNotes2015 (18 Jan 2021, DalePlummer)

- Should correlations be compared or should slopes be compared? Analysis of slopes would probably assume that the two GRE scores have the same standard deviation or are calibrated to each other.
- Analysis of differences in slopes: fit a linear model first-year score = b0 + b1*GRE + b2*[new GRE] + b3*GRE*[new GRE]; test of interest is the interaction test (H0:b3=0); [x] is 1 if x is true, 0 otherwise. Test of student outcomes being better in one time period than another when the absolute value of GRE score (from whichever test) is held constant: H0: b2=b3=0. Data are stacked in a tall and thin dataset. Adjusting for intensity of biology coursework in undergrad could be important to do.

- Because time spans are fairly long, one could also put time in the model as a smooth trend. A general approach is a regression spline in time with several knots, three of the knots being close to the GRE transition point. This model would not have [new GRE] in the model.
- Is there any hint that grade inflation has changed over time so as to confound the result?
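The stacked-data interaction model described above can be sketched as follows. This is a minimal illustration with simulated data standing in for the real scores; the variable names, effect sizes, and sample size are all made up.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "gre": rng.normal(0, 1, n),        # standardized GRE score (either exam)
    "new_gre": rng.integers(0, 2, n),  # [new GRE]: 1 if tested on the new exam
})
# Simulated outcome: first-year GPA with a hypothetical era effect
df["gpa"] = 3.0 + 0.3 * df["gre"] + 0.1 * df["new_gre"] + rng.normal(0, 0.3, n)

# first-year score = b0 + b1*GRE + b2*[new GRE] + b3*GRE*[new GRE]
fit = smf.ols("gpa ~ gre * new_gre", data=df).fit()

# Differing slopes: H0: b3 = 0 (the interaction coefficient)
p_interaction = fit.pvalues["gre:new_gre"]

# Era effect with GRE held constant: joint H0: b2 = b3 = 0
idx = [fit.model.exog_names.index(name) for name in ("new_gre", "gre:new_gre")]
R = np.zeros((2, len(fit.params)))
for row, col in enumerate(idx):
    R[row, col] = 1.0
p_joint = np.ravel(fit.f_test(R).pvalue)[0]
```

The same model can be extended with a regression spline in time (replacing `new_gre`) as suggested in the following note.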

- Examining associations between groundwater ion concentrations and mortality rates from heart disease in 99 larger cities. We have used the Spearman rho test, but we would like to confirm the approach with biostatisticians.
- Average income and education are available and could serve as covariates
- Some useful methods: loess nonparametric smoother on top of a regular scatterplot, thermometer plots superimposed on US map with two thermometers per city; better: add a 3rd thermometer measuring geology; consider one of Daniel Carr's micrographics
- What about climate? How does one adjust for latitude and longitude in a regression model? Perhaps a tensor spline.
- Can get quick statistical assistance if the lead investigator has a primary appointment in Cardiovascular Medicine (then contact Frank Harrell); or by applying for a voucher from VICTR
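The Spearman rho calculation itself is straightforward; a sketch with simulated city-level data (the variable names and values are invented, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical data for 99 cities: water ion concentration vs. mortality
hardness = rng.uniform(10, 300, 99)                      # e.g., mg/L per city
mortality = 700 - 0.5 * hardness + rng.normal(0, 40, 99)  # deaths per 100,000

# Spearman's rho: rank-based, robust to monotone nonlinearity and outliers
rho, p_value = stats.spearmanr(hardness, mortality)
```

Covariates such as income and education would be handled in a regression model (e.g., on ranks or with a semiparametric model), not by the simple correlation above.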

- Determining whether patients who were prescribed a particular drug for manic-depressive disorder have a lower incidence/mortality rate from heart disease. We have extracted patient data from the Synthetic Derivative, where we know the number of patients who were prescribed a drug of interest and how many among this group have had heart disease. We also have the corresponding numbers from a control patient pool (matched by age, gender, and ethnicity). We would like to know the best way to compare these data and to determine whether prescription of the drug does indeed affect disease incidence. Also interested in associations with a pre-selected SNP.
- Need to worry about confounding due to indication. Need to have very accurate determination of which controls have the disease that causes the drug to be prescribed, and very accurate prescription data.

- A couple of other minor questions related to similar data analysis.

- I am a postdoc in the BRET office and am investigating how GRE scores predict PhD student success (for example, the correlation between GRE-Quantitative scores and first-year graduate GPA). Specifically, I am trying to compare the predictability of the new GRE to the old GRE. Since my independent variables (GRE scores) and populations differ, I believe I may need to use some type of meta-analysis technique to compare the correlations. Can you help me out with this? I typically run my stats in SPSS, but am open to solutions that use R or Prism.

- To discuss a new study we would like to start. It will be based on a recently completed study on inter-rater reliability for which we will bring the data.
- Two raters will evaluate each subject and decide whether to refer or not. Suggest McNemar's test or the kappa statistic. Can also calculate sample size based on prevalence, sensitivity, and specificity.
- Apply for VICTR voucher for study design. Suggest $2000.

- Quote to include as part of VICTR biostat voucher request. The voucher will be used for developing and writing a grant statistical analysis plan based on the multiphase optimization strategy, which uses a factorial study design. Have informally discussed the proposed study design with Robert Greevy, PhD.
- Apply for VICTR voucher to help with study design, sample size and power calculation, and analysis plan. Estimate $5000.

- I am currently working on a clinical research project where I am attempting to create a CXR scoring system which can be used to predict outcomes in patients with acute lung injury. My mentor previously created and validated a CXR scoring system in a population of patients as a means to predict pulmonary edema by comparing her score to measured lung weights. I have recently applied my new score to that same population of patients. At a glance, it does not appear that the correlation of CXR score with lung weight is much better using my score than hers. However, I am wondering what test I can use to actually test this. For example, if her r = 0.63 and my new r = 0.66, how can I tell whether that difference is significant?
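Because both scores were applied to the same patients and correlated with the same lung weights, the two correlations are dependent and share a variable. One published approach is the Meng-Rosenthal-Rubin (1992) z-test; a sketch below, where the between-score correlation `r12` and the sample size `n` are invented placeholders (they would come from the actual data):

```python
import numpy as np
from scipy import stats

def compare_dependent_correlations(r1, r2, r12, n):
    """Meng-Rosenthal-Rubin (1992) z-test for two correlations that share a
    variable: r1 = cor(new score, lung weight), r2 = cor(old score, lung
    weight), r12 = cor(new score, old score), n = number of patients."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)       # Fisher z-transforms
    rbar2 = (r1 ** 2 + r2 ** 2) / 2.0
    f = min((1.0 - r12) / (2.0 * (1.0 - rbar2)), 1.0)
    h = (1.0 - f * rbar2) / (1.0 - rbar2)
    z = (z1 - z2) * np.sqrt((n - 3) / (2.0 * (1.0 - r12) * h))
    p = 2.0 * stats.norm.sf(abs(z))
    return z, p

# r12 = 0.80 and n = 100 are assumptions for illustration only
z, p = compare_dependent_correlations(r1=0.66, r2=0.63, r12=0.80, n=100)
```

With values in this range, a difference of 0.03 is typically far from significant unless the sample is very large.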

- Name of project: Parenteral Protein Calculator (PPC)
- Type: Randomized controlled clinical trial, un-blinded
- Help needed: Discussing the primary and secondary outcomes
- Study status: Ongoing data collection

- (1) recommended methods to answer the questions and (2) what other information I need to gather for a power analysis.

- Statistical analysis regarding clinical ethics consult services.

- analyzing data regarding entrepreneurs in Haiti

- I’m a medical student working on a project in ophthalmology. I am in need of some help with my project in planning the data collection process from Synthetic Derivative so that it will be most effective for analysis later by a biostatistician
- Retrospective cohort study looking at risk factors (ventilation, hospital LOS, etc). Total 120 patients and 60 had events. Only baseline factors can be evaluated.

- Validation for dichotomous variables (I tried Cronbach's alpha and had really low coefficients) and also analyzing tertiles
- A study about baseline nutrition. Ten categories of different food.

- To discuss data analysis/interpretation for a project I am currently working on. The study is a comparative analysis of cytomegalovirus viral loads in whole blood and plasma using 4 different assay methodologies/testing platforms with the goal of understanding the interrelated effects of specimen type, assay methodology, use of different calibrants, and patient specific variables on CMV viral load quantitation. I have collected the viral load data on all testing platforms, as well as select clinical variables, on a cohort of 25 patients and would really appreciate some guidance/assistance in the statistical analysis.

- To discuss a model I developed for predicting functional status of injured older adults (Dr. Maxwell is the PI)

- My mentor and I are interested in creating a predictive/correlative model that could correlate the success of Type 1 Diabetes prevention trials with past data on the same therapeutics used to prevent or reverse T1D in mice. Since I have little experience in mining data from large clinical trials, I was hoping to go over our thoughts on the model with you all and see what statistical analyses would be best to use. I have a few papers on the topic that I could use as examples, and we could go from there.

- Discuss a strategy for an interrupted time series with control analysis that I’ve been working on in Stata for a quality improvement initiative.

- We have data in an Excel spreadsheet and need help analyzing it. The spreadsheet is on patients with a diagnosis of ALL (B and T cell) who have had allogeneic transplant.
- Need biostat help with survival analysis. $2000 voucher is suggested.

- I’d like to evaluate the effect of a quality improvement intervention at the Dayani center on wait times for cardiac rehabilitation appointments after discharge. I have wait time data from 2013 to 2014 and the intervention was implemented throughout 8/2013 to 9/2013. Mean wait times were approximately 16 days during the first six months of 2013 and 11 days during the first six months of 2014, and this difference was significant using a t-test. This t-test analysis is admittedly crude, so I’d like to speak with the biostatisticians about next steps and how these data can best be presented visually. Possibilities include an interrupted time series analysis.

- Discuss sample size calculations for a grant proposal. An excerpt from the grant is attached, which overviews the study design (repeated measures design, with multiple parameter studies) and includes my own sample size calculations (which I would like to discuss and receive feedback on). Aim 1 is basic science, and Aim 2 is a nearly identical experiment but in a clinical population.
- Statistical Analysis & Sample Size. For each parameter sweep, statistics will be computed on outcome measures using a repeated measures analysis of variance (ANOVA), with significance level of 0.05 and Holm-Sidak correction. To account for size differences between subjects all data will be non-dimensionalized prior to statistical analysis, using base units of gravitational acceleration, body mass and leg length (e.g., (Zelik and Kuo, 2010)). Non-dimensionalization will cause discrete independent variables (values listed in Table 1) to become continuous, so we will bin data prior to statistical analysis.
- Paired design, use paired t-test for sample size calculation based on the standard deviation of the difference within subject. If the variation is small, try to use smaller type I error of 0.01. Can also plot power versus effect size.
- Consider multivariable linear model for analysis.
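The suggested paired sample-size calculation can be sketched with statsmodels; a paired t-test is a one-sample t-test on the within-subject differences. The 0.5-unit mean difference and SD of 1.0 below are placeholders, not values from the study.

```python
from statsmodels.stats.power import TTestPower

# Effect size for a paired design = mean difference / SD of the differences
effect_size = 0.5 / 1.0   # hypothetical planning values

analysis = TTestPower()
n_pairs = analysis.solve_power(effect_size=effect_size, alpha=0.05,
                               power=0.80, alternative="two-sided")

# With the stricter type I error of 0.01 suggested when variation is small
n_pairs_strict = analysis.solve_power(effect_size=effect_size, alpha=0.01,
                                      power=0.80, alternative="two-sided")
```

Plotting `solve_power` over a grid of effect sizes gives the power-versus-effect-size curve mentioned above.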

- The project is a clinical trial in which children with cystic fibrosis were given a DHA supplementation pill at two different doses. It was an RCT with a cross-over design that included a placebo arm in addition to two arms for each of the doses of DHA. Blood, urine, and exhaled breath condensate were collected at baseline and after each of the study arms. The blood is analyzed for 20 different plasma fatty acids; the urine and exhaled breath are each analyzed for a metabolite of prostaglandin-E (single value). The goal enrollment was 18 participants, but the study was powered for 13. We enrolled 17 participants, but 3 participants dropped out prior to completing the first study arm (we only have baseline values for these 3 participants). In addition, 1 participant completed only two of the study arms (placebo & high dose) and 1 participant completed just one arm (low dose). Luckily, these last two participants did different study arms.

- Missing covariates can be imputed, but imputation of the response variable is not recommended.
- Can do a complete-case analysis and then include the other 3 patients in a sensitivity analysis.
- Start with the global test of whether any arm is different; if yes, then perform pairwise tests.
- Repeated measures design; fit a model that adjusts for baseline measures.
- For multiple comparisons, present both adjusted and unadjusted p-values.
- Use a scatter plot of the triplicate measurements to examine their distribution, then summarize with either the average or the median.

- I am in the process of submitting an IRB and application for VICTR funding for my fellow project. I was hoping to meet with someone to discuss sample size and basic study design (I am working on a pilot study in adolescents with Crohn’s Disease).

- I am finalizing a protocol for a non-inferiority trial comparing observation versus surgery for bladder cancer. The primary endpoint is the proportion of patients who experience progression of disease at 12 months. We know from previous studies that the risk of progression at one year is about 1%. The largest tolerable margin we would accept is 5% in the observation group before we said that this method is unacceptable. Using a type I error rate of 5% and assuming 10% dropout and 10% crossover from the observation group to the surgery group, I have calculated that 135 patients will be needed to achieve 80% power.
- Defining non-inferiority margin: odds ratio vs. relative risk vs. absolute proportion
- Develop utility measures or assign a multi-level rank based on panel discussion, use ordinal scale instead of binary outcome, in order to increase power.
- Apply for $5000 voucher
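For reference, one common normal-approximation formula for a non-inferiority comparison of proportions, plugged with the numbers stated above. This is a sketch only: it is not necessarily the method behind the investigator's 135 figure, and with event risks this low (1%) an exact method would be preferable.

```python
import math
from scipy.stats import norm

p_obs = p_surg = 0.01       # assumed true progression risk in both arms
margin = 0.04               # tolerable absolute excess: 5% vs. 1%
alpha, power = 0.05, 0.80   # one-sided alpha for non-inferiority

z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
var = p_obs * (1 - p_obs) + p_surg * (1 - p_surg)
n_per_group = math.ceil((z_a + z_b) ** 2 * var / margin ** 2)

# Crude combined inflation for 10% dropout plus 10% crossover
n_total = math.ceil(2 * n_per_group / (1 - 0.2))
```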

- Two methods to measure tumor size: US vs. MRI. Want to know which method is better for certain cases.
- Patient characteristics and tumor types are all available. The actual tumor size is also known in most cases.
- Descriptive statistics and some graphs showing how US performs. Paired t-test; multiple linear regression adjusting for other confounders (regress MRI on US).

- The effect of board time on patient outcomes such as mortality, readmission, etc.
- Suggest multivariable logistic regression.
- A VICTR voucher of $2000 is suggested.

- Nick is requesting additional input on an ongoing project.

I am interested in having someone double-check my sample size calculation for a randomized trial. Attached is our protocol, but briefly we are studying active surveillance versus surgery for low-risk non-muscle-invasive bladder cancer. We estimated that a sample size of 148 eligible randomized patients is required to detect a 20% improvement in event-free survival in the active surveillance group by using a 5%-level one-sided log-rank test with 90% power. In this calculation, we assume a 20% withdrawal rate. This study design calls for one-sided testing, since the standard (surgery) would only be affected if the active surveillance approach proved superior to the surgery group in terms of event-free survival. If the active surveillance group is the same as or inferior to the surgery group, surgery will remain the standard of care. Furthermore, on the basis of anecdotal experience, active surveillance is unlikely to result in a higher event rate than surgery, thereby justifying the one-sided approach.

I am involved in the beginning stages of a clinical research project involving palliative care and physical therapy in our palliative care unit. I was hoping to review the study design with someone and get tips on data collection and future analysis plan.

- Currently, the outcome (Likert scales of confidence in providing care) is measured as a pre- and post-survey immediately before and immediately after a teaching/training intervention.
- Think about whether a control group might increase the validity.
- Include another post-measure following in-home practice.
- Hard endpoints could include early discharge or patient QoL.

- Sample size discussion: want the effect size to be smallest detectable difference of clinical significance.
- For this Likert scale, we discussed dichotomizing the outcome to a binary endpoint only for purposes of sample size calculation. This might require less pilot data, but actually getting pilot data is more advisable.
- Here's some example text from PS sample size software based on MADE UP effect sizes: We are planning a study of independent cases and controls with 1 control(s) per case. Prior data indicate that the failure rate among controls is 0.6. If the true failure rate for experimental subjects is 0.8, we will need to study 81 experimental subjects and 81 control subjects to be able to reject the null hypothesis that the failure rates for experimental and control subjects are equal with probability (power) 0.8. The Type I error probability associated with this test of this null hypothesis is 0.05. We will use an uncorrected chi-squared statistic to evaluate this null hypothesis.
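The quoted PS calculation can be reproduced approximately with statsmodels, using Cohen's arcsine-transformed effect size for two proportions (same made-up 0.6 vs. 0.8 failure rates as the example text):

```python
import math
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Cohen's h for failure rates of 0.8 (experimental) vs. 0.6 (control)
h = proportion_effectsize(0.8, 0.6)

# Two-sample normal-approximation power analysis, equal group sizes
n = NormalIndPower().solve_power(effect_size=h, alpha=0.05, power=0.80,
                                 alternative="two-sided")
n_per_group = math.ceil(n)
```

This arcsine approximation lands on 81 per group, matching the PS output; an exact chi-squared-based calculation can differ by a patient or two.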

- Sometimes pre- and post- groups have a learning effect when responding to surveys. They may anchor their answers to the first survey. This is a potential source of bias: Hawthorne effect, learning fatigue, general time trend.
- To do the analysis of a five point Likert scale, use a Wilcoxon test for the comparison. To power that test, you need to know the distribution of the five survey responses.
- For survey development, consider using validated surveys from the literature. If the survey is administered online (e.g., via REDCap), consider using slider scales instead of a five-point Likert scale: the data will be more nearly continuous, which is statistically more powerful. It is possible to do this on a paper form as well. Not sure whether the slider-scale approach has been examined for phone/in-person interviews. Note: you would still analyze this as an ordinal variable.

- Questions on data cleaning using Stata.
- One record per patient, with up to 30 medications in columns. There are 8-9 check boxes for type of error per medication (e.g., same/omission/dose change). The source of error is also recorded as 3-4 check boxes, but there probably should not be overlap (i.e., multiple types).
- Reshape the dataset from "wide" to "long" so that there is one patient-medication pair per row. You might then reshape it back to the patient level after some data manipulation.
- See the Stata "reshape" command for converting from "wide" to "long" and vice versa. It is very helpful to experiment with a small data set to get the feel of how reshape works.
- The Stata "egen" command can be very helpful for summarizing (means, counts, sums, etc.) within groups (e.g. all the records for a single patient). It is also good for indicating if a particular condition exists within groups.
- The egen functions marked "allows by varlist" would be the most relevant here.
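For anyone not working in Stata, the same reshape-then-summarize maneuver can be sketched in pandas (toy data; the real sheet has up to 30 medication columns plus the error check boxes):

```python
import pandas as pd

# Toy "wide" dataset: one row per patient, medications in columns
wide = pd.DataFrame({
    "patient_id": [1, 2],
    "med1": ["aspirin", "statin"],
    "med2": ["warfarin", None],
    "med3": [None, None],
})

# Stata "reshape wide -> long": one patient-medication pair per row
long = pd.wide_to_long(wide, stubnames="med", i="patient_id", j="med_num")
long = long.dropna(subset=["med"]).reset_index()

# "egen"-style within-group summary: medication count per patient
counts = long.groupby("patient_id")["med"].count()
```

As with Stata's `reshape`, experimenting on a small dataset like this makes the wide/long behavior much easier to internalize.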

I am a Vanderbilt orthopaedic surgery resident and I would like to attend the lunchtime biostats clinic this Thursday, July 16th. I am in the process of writing a grant for a project that would need a Markov decision model. I would like to meet with a statistician who would be interested in helping me create the Markov model for the project. The study will be looking at a comparison of the cost-effectiveness of 2 surgical interventions in pediatric orthopaedic cancer surgery.

- Osteosarcoma
- Two types of surgical implants: metal or allograft
- 5-year cumulative mortality 0.3
- Interested in simulation to estimate expected cost; some branches are death, revision surgery, etc.
- Currently tabulating complication risks from the literature (infections, etc.)
- Probabilistic graphical models - Bayesian networks - may be of use; more of a "closed" network than what a Markov sequential process would entail
- Take a look at the `netica` software; R has `gRain` and other packages
- A sensitivity analysis will be needed to check assumptions about parameters
- Options for biostat support: VICTR (general support), Kristin Archer-Swygert, VICC Shared Biostat Resource (Tatsuki Koyama), Chris Fonnesbeck (works with Kristin)
- Talk to Dave Pinson at some point

My name is Jennifer Cunningham, and I recently attended a biostatistics clinic to discuss a logistic regression to identify factors associated with parental willingness and how those predictors differ by race. We have completed the analysis based on the advice received in the biostatistics clinic on June 4. We were told we could return to discuss the steps taken to ensure we completed the analysis correctly.

I am a hematology/oncology fellow. I would like to attend a biostat clinic to go over stats for a retrospective review on rates of follow-up after abnormal mammogram to compare rates of follow-up between different groups (insurance status, race/ethnicity, language, etc.).

My name is Sam Rubinstein; I'm a PGY-1 in the department of internal medicine. I'm working on a research project involving the effect of stem cell transplantation on proteinuria in patients with amyloidosis; my PIs are Dr. Frank Cornell (dept of hematology/oncology) and Dr. Langone (dept of nephrology). We've collected and compiled our data, and would like to get funding so that we can have some assistance from a statistician at analyzing the data. I am in the process of writing an application for a $2000 voucher from VICTR, and it looks like the process involves attending a biostatistics clinic to approve the estimate. Would it be possible to arrange for a meeting on Wednesday or Thursday of next week so that we can get this process started?

Chris and I are working on a systematic review of weight bearing after posterior acetabular fractures. Specifically we want to look at Merle d’Aubigne functional scores and complication rates. However, there is very little research that looks directly at this question. Thus, we have compiled every article we can find on posterior acetabular fractures that lists their Merle d'Aubigne scores and weight bearing protocol. Our primary question for you is how can we best interpret this data.

I conducted a study looking at "Factors associated with Parental Willingness of their Adolescents Participation in HPV vaccine clinical trials". Specifically, I need assistance in conducting a logistic or multiple regression to identify factors associated with parental willingness and how those predictors differ by race. My specific aim was the following: to identify parental willingness, and factors influencing willingness for their child's participation in HPV vaccine clinical trials, that may be unique to African American parents as compared to Caucasian parents of adolescent girls aged 9 to 12 years. The survey will be administered to parents in community organizations to demonstrate factors influencing parental willingness for their child's participation in HPV vaccine clinical trials and how they differ between African American and Caucasian parents, using multiple regression analysis. The dependent variable is parental willingness for their adolescent to participate in HPV vaccine clinical research trials. The independent variables were child's gender, child's health, child's insurance, HPV vaccine intent, knowledge of clinical trials (CT) prior to survey, type of CT information, comprehension of CT information, personal experience with CT, parent education level, parent gender, child race, parent race, benefits of HPV vaccine (MeanBenefits), barriers of HPV vaccine (MeanBarriers), knowledge of CT (MeanCRTKnowledge), advantages of CT for child (MeanCRTAdvantages), disadvantages of CT for child (MeanCRTDisadvantages), and trust of medical researchers (MeanTrust).

- Conduct extensive descriptive analysis to understand the relationships among the dependent variable, the scales, and demographic information;
- Notes from Dandan Liu: treat the dependent variable (scored 1 to 5) as an ordinal variable and use a proportional odds regression model;
- The primary analysis will investigate the effect of the 6 scales on the outcome;
- The secondary analysis will investigate heterogeneous effects by including an interaction term between race and the scale of interest

My project is a clinical project in Pediatric Infectious Diseases. We are doing a matched statistical analysis of family history of cases with a fever syndrome and healthy controls.

- Use McNemar's test for the matched case-control study
- For categorical variables, use the Bhapkar test

I would like to reserve a time for biostat clinic on Thursday, May 28th if possible. Attached is my presentation.

- Notes from Frank Harrell: It is important to have a statistical analysis plan for each study. Does yours have one? It would not be a good idea to do a median split on age; rather, check for a smooth interaction with age.
- Unlikely that we can modify existing analysis which seems OK, suggest senior author attend clinic to make case for re-analysis.

I want to identify the number of patients I need to include on a tissue microarray in order to determine whether my protein of interest predicts biochemical failure for prostate cancer.

- Prostate cancer tumors identified with three punches sampled to create slides. Outcome is recurrence. Would like to power the study to look at one protein, with the ability to expand the study to include other biomarkers. The one protein would be measured as a score from 0-6.
- Able to stratify patients into risk groups using nomograms and medical records (including Gleason score, etc.).
- Have 8000 patients; half have tissue/follow-up. Need to ensure that follow-up is complete to rule out recurrence. Plan to do this via inclusion criteria. However, this cohort may no longer be "representative" because ideally the inclusion criteria are applied at the time of initial biopsy.
- Outcome will be time to recurrence.

- Suggest nested case-control study: cases are all patients who recur. For each case, select a control from the risk set of men who had at least as much recurrence-free follow-up. Maximize power this way, using 1:1 or 2:1 controls:case.
- Sample size: Using PS (available for free download on the Vanderbilt Wiki or Bill Dupont's website), we did the following sample size calculation: We are planning a study of matched sets of cases and controls with 1 matched control(s) per case. Prior data indicate that the probability of exposure among controls is 0.1 and the correlation coefficient for exposure between matched cases and controls is 0. If the true odds ratio for disease in exposed subjects relative to unexposed subjects is 2, we will need to study 69 case patients with 1 matched control(s) per case to be able to reject the null hypothesis that this odds ratio equals 1 with probability (power) 0.8. The Type I error probability associated with this test of this null hypothesis is 0.05.
- Suggest adjusting for Gleason score (and other known risk factors) in analysis. This would then measure the amount of info the stain gives above and beyond the Gleason score.
- Our power calculation is very best case scenario where you are looking only at the stain by itself. If plan to adjust for other risk factors, odds ratio for stain might be decreased, so then you would need more patients. May want to bump up sample size as much as possible past these sample size calculations.
- Tennessee tumor registry may be helpful source of follow-up.

I am Danielle Kimmel, and I am a postdoc in the chemistry department under David Wright. We are interested in developing a receiver operating characteristic curve for our rapid diagnostic tests for malaria. The end goal is for us to design an experiment to provide this information for any of our rapid diagnostics, but first we wanted to get a grasp of how many patient samples or tests needed to be performed.

I'm a fourth year radiology resident doing a research project with Dr. Manning from VUIIS. I'm going to be submitting a VICTR proposal and need help with the Data Analysis/Sample Size Justification section.

I would like some statistical assistance on selecting the number of patients I would need to enroll for a study. My concept is validation of a FitBit's heart rate against telemetry in patients less than four years old who have cyanotic congenital heart disease and are admitted to the hospital for any reason. Validation of FitBit heart rate data in these patients is the first step, with the ultimate goal of providing a wearable sensor (i.e., a FitBit) for near-continuous home monitoring of this patient population, in the hope of associating home monitoring data with outcomes. This may allow development of predictive software to warn of clinical decompensation if near-continuous home monitoring can be validated and adopted. Thank you for any assistance you can provide with this request.

- Might focus on mean absolute difference between the new measurement and a gold standard, and a confidence interval for that
- Question of how long to monitor a child e.g. 24h or a few hours
- Next stage is to predict clinical endpoints, which will require a large number of subjects and events unless there is a hard clinical response that is a continuous measurement
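The suggested "mean absolute difference plus confidence interval" framing, sketched with simulated paired readings (the bias and noise values below are invented, not device specifications):

```python
import numpy as np

rng = np.random.default_rng(3)
# Simulated paired heart rates: telemetry (gold standard) vs. wearable
telemetry = rng.normal(120, 15, 50)
fitbit = telemetry + rng.normal(2, 5, 50)   # assumed bias of +2 bpm, SD 5

abs_diff = np.abs(fitbit - telemetry)
mad = abs_diff.mean()                       # mean absolute difference

# Percentile-bootstrap 95% CI for the mean absolute difference
boot = [rng.choice(abs_diff, size=abs_diff.size, replace=True).mean()
        for _ in range(2000)]
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
```

A Bland-Altman plot of the paired differences against their means would be a natural companion display.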

- Would like to have a general family satisfaction measure. May use # times language services were used, how often approached the family.
- If base the analysis on frequency of use of services, it is very important to capture the patient load all along the course of the study so that the # services can be normalized for susceptible patient load
- Simplest analysis (but requiring lots of assumptions) is to compare two counts (e.g., Poisson distribution)
- Need to be clear on individual patient/family assessments vs. system assessments
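The simple two-count comparison mentioned above can be sketched with the standard conditional binomial test: given two Poisson counts with known exposures, the first count is binomial conditional on the total. The counts and patient loads below are invented.

```python
from scipy.stats import binomtest

# Hypothetical: 40 uses of language services pre-intervention, 60 post,
# with equal susceptible patient load in the two periods
c_pre, c_post = 40, 60
t_pre, t_post = 1.0, 1.0   # relative patient load ("exposure") per period

# Under H0 of equal rates, c_pre ~ Binomial(c_pre + c_post, t_pre/(t_pre+t_post))
result = binomtest(c_pre, n=c_pre + c_post, p=t_pre / (t_pre + t_post))
```

This is why capturing patient load matters: unequal `t_pre`/`t_post` changes the null proportion, not just the interpretation.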

- Retrospective case-control study of novel fusion events to correlate with biochemical recurrence of disease
- Patients 2003-2009 having radical prostatectomy; looking at rising PSA (most happen in first 3y)
- If biochemical recurrence can be measured on a continuous scale, power will be GREATLY increased
- ~10% event rate (novel fusion)
- To calculate sample size, try using software: http://biostat.mc.vanderbilt.edu/wiki/Main/PowerSampleSize
- To do the case-control sample size, you'll need additional information from the pilot data. It might be easier just to do a dichotomous power calculation and look at how the sample size changes as the effect size changes.

The palliative care research team would like to schedule a biostats clinic in order to better understand the statistical components of a potential survey instrument to assess patient understanding of therapy, prognosis. We would also like input regarding contamination of our control group as the intervention (early palliative care consult) can be considered standard care and thus, control participants might get exposed to the intervention (although not “early”).

- Difficulty in choosing primary endpoint. Something like ventilator-free days?
- Can there be multiple endpoints? Some advantage in having a simpler endpoint that will convince skeptics. Some want a "hard" endpoint.
- Multiple endpoints might be combined using utilities
- Should assessments be made after death or prospectively?
- What is a "safety" endpoint here? Caregiver burden?
- Preference for validated instruments especially if need to combine multiple items; but doesn't hurt to ask a handful of individual questions for qualitative assessment
- There are mixed method approaches
- It may be easy to contaminate the control group, e.g., just by asking them questions
- Post-death assessments may be most unbiased if timed wisely
- Need to time carefully with regard to receipt of hospital/physician bills


I have written a research proposal, and my questions are: have I chosen the appropriate statistical analysis for the data to be obtained, how many subjects do I need for statistical power, and are there any methodology flaws? I have attached the proposal.

- Correlation between body/extremity temperature and blood flow
- Interested in exploratory analysis of surface temperature patterns and several traditional lab measurements
- Some variation in hospital room ambient temperature; may need to accurately capture these temperatures for adjustment
- Are changes in temperature important (central - peripheral) or are absolute temperatures important?
- May do a 20-subject pilot study to find out sources of variation in the measurement due to camera angle etc.
- Typically there is an esophageal or rectal temperature probe for measurement of core temp
- Sample size will need to be larger if the model relating to cardiac output is an empirically-derived model (as opposed to plugging into an existing biomathematical model)
- Sample size based on accurately estimating a linear or Spearman correlation coefficient: see Section 7.5.2 in http://biostat.mc.vanderbilt.edu/wiki/pub/Main/ClinStat/bbr.pdf
- Consider a response feature approach to analyzing the data
- Estimate pilot study sample size based on confidence interval widths.
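The precision-based pilot sizing suggested above can be sketched with the Fisher z interval for a correlation, checking how the CI width shrinks across candidate sample sizes (the planning value r = 0.5 is a placeholder):

```python
import numpy as np
from scipy.stats import norm

def correlation_ci(r, n, level=0.95):
    """Fisher-z confidence interval for a Pearson correlation; also a rough
    approximation for Spearman's rho (whose SE is slightly larger)."""
    z = np.arctanh(r)
    se = 1.0 / np.sqrt(n - 3)
    crit = norm.ppf(0.5 + level / 2.0)
    return np.tanh(z - crit * se), np.tanh(z + crit * se)

# CI width for an assumed r = 0.5 at several candidate pilot sizes
widths = {}
for n in (20, 50, 100):
    lo, hi = correlation_ci(0.5, n)
    widths[n] = hi - lo
```

Choosing the pilot size then amounts to deciding which of these widths is scientifically tolerable, which mirrors the Section 7.5.2 approach cited above.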

I’m planning to apply for VICTR funding for a research project I am doing. My research project is a secondary data analysis of the BRAIN-ICU cohort looking at whether executive dysfunction at 3 months after discharge from the ICU is an independent predictor of mental health outcomes.

I would like to present a project on April 23rd regarding a secondary data analysis related to Lean management. The purpose of the project is to determine how lean was implemented in different units and if the implementation approach was associated with the sustainability of Lean in these units.

I’m interested in chatting with an expert about ways in which calibration of a predictive model (e.g., a regression model) can be compared between different models. I am comfortable with how to measure discrimination, calibration, and clinical usefulness, but would love to discuss whether a particular metric of calibration is more convincing to an expert audience when choosing between two hypothetical models. Dr. Walsh is currently working on a comparison of five hypothetical models for prediction of hospital readmission, using four years of training data and one year of validation. In his previous experience, most comparisons discuss discrimination using the C statistic and only briefly discuss calibration (using a plot). He is interested in discussing the most rigorous way to compare the calibration of models. A few discussion points: -- Splitting on time is not recommended: "Any time you split on a variable that is easy to model, you are setting yourself up for failure." -- Literature comparing calibration curves is sparse, but this is an area of interest. If you can calculate two calibration curves, you may be able to bootstrap them and obtain confidence intervals. -- May consider Spiegelhalter's calibration test as a starting point. -- See the recent paper on how to estimate calibration curves most accurately (PC Austin, EW Steyerberg, Statistics in Medicine).
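The Spiegelhalter calibration statistic mentioned above is simple to compute directly. A minimal sketch (the function name is mine and the data are simulated, not from the project):

```python
import numpy as np

def spiegelhalter_z(y, p):
    """Spiegelhalter's z statistic for calibration of binary predictions.
    y: observed 0/1 outcomes; p: predicted probabilities."""
    y, p = np.asarray(y, float), np.asarray(p, float)
    num = np.sum((y - p) * (1 - 2 * p))
    den = np.sqrt(np.sum((1 - 2 * p) ** 2 * p * (1 - p)))
    return num / den   # approximately standard normal under good calibration

# Simulated check: outcomes generated from p should look calibrated;
# outcomes generated from shifted probabilities should not.
rng = np.random.default_rng(0)
p = rng.uniform(0.05, 0.5, 20000)
print(spiegelhalter_z(rng.binomial(1, p), p))        # small |z|
print(spiegelhalter_z(rng.binomial(1, p + 0.1), p))  # large positive z
```

For comparing two models' calibration curves, the bootstrap idea above would wrap a statistic like this (or the curves themselves) in resampling to obtain confidence intervals.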

This is a follow-up to a previous meeting, to help with the sample size calculation for re-submission of a VICTR proposal. At a previous clinic there was a general discussion on identifying the total number of subjects needed to perform the overall study. This calculated sample size (which is quite large) was considered scientifically worthwhile, but not feasible in the required time window according to recent feedback from VICTR. At this clinic, there was discussion regarding the most appropriate way to propose a pilot study that VICTR would consider both feasible and worthwhile for funding. Prior work in the field is based on smaller sample sizes (less than 30). For the first calculation, an appropriate level of precision was decided and the required sample size was calculated. It was suggested to approach sample size from the other direction: determine what is a reasonable sample size that can be evaluated in a given time frame, then calculate the level of precision that would be obtained using that sample size and justify why that is scientifically relevant. Pick a measurement that is interesting but not one that is difficult or unstable, and base the calculations on that measurement. Comparing continuous measurements is recommended; this will allow the sample size from the power calculation to be smaller. Plan to return next week to discuss.
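The "other direction" calculation described above can be sketched as follows (hypothetical numbers, and a normal approximation is assumed for a continuous measurement): fix a feasible n and report the confidence-interval half-width it buys.

```python
import math

def ci_halfwidth(sd, n):
    """Approximate 0.95 CI half-width for a mean (normal approximation)."""
    return 1.96 * sd / math.sqrt(n)

def n_for_halfwidth(sd, margin):
    """Smallest n whose 0.95 CI half-width is at most `margin`."""
    return math.ceil((1.96 * sd / margin) ** 2)

# Hypothetical: measurement SD = 10 units, feasible pilot n = 25
print(ci_halfwidth(10, 25))     # precision obtained with n = 25
print(n_for_halfwidth(10, 5))   # n needed for a +/-5 unit half-width
```

The proposal would then argue why the half-width obtainable at the feasible n is still scientifically relevant.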

- A brief overview of the project: We aim to compare the morbidity and mortality of preterm babies with congenital heart disease to that of preterm babies with no heart disease. We will be looking at the following parameters: birth weight, gestational age, initial APGARs, immediate postnatal acid-base balance, presence of an antenatal diagnosis, gestational age at birth versus at the time of surgery (analyzing time and reasons for deferring surgery), presence of extracardiac anomalies, other co-morbidities of prematurity such as intraventricular hemorrhage, bronchopulmonary dysplasia, necrotizing enterocolitis, pulmonary hypertension, presence of aspiration/reflux and need for additional surgical procedures, presence of thrombosis, culture-proven sepsis, presence of arrhythmias, medications such as inotropes, need for cardio-pulmonary resuscitation, extracorporeal membrane oxygenation support while in the hospital, length of hospital stay, long-term follow-up, and overall outcome. In addition, we aim to compare surgical outcomes in this preterm population with CHD with all infants <1 year who underwent open or closed heart surgery at our institution, excluding the patients in this study. This comparison may extend to include recently published national surgical data with a similar cohort. We will also compare the incidence of heart defects in preterm babies with the incidence reported in a recent study by Egbe et al., using the Nationwide Inpatient Sample database, to identify any institutional differences.
- Morbidity/mortality of pre-term babies with/without heart defects. Want to estimate outcomes to help advise parents on treatment decisions. Also interested in comparing term and pre-term babies who have open heart surgery.
- Want to determine best approach to set up REDCap database. There is no longitudinal database.
- REDCap has branching logic (i.e. skip patterns) that can be used when not all participants require entry of certain questions.
- Recommend looking at Spreadsheet from Heaven on biostatistics wiki.

- Discussing the analysis now.
- Wish to estimate the incidence of complications among pre-term infants stratified by heart disease, and to look at discharge and survival for pre-term vs. non-pre-term babies using Kaplan-Meier estimates.

- Estimating the effect of surgical intervention in pre-term infants compared with pre-term infants without surgical intervention will be difficult, because babies that require surgery are inherently different from those that do not receive it. Could derive a propensity score for whether the surgery would be early or late. Sort patients by propensity score and trim the data: if there are patients who should never have had delayed intervention, they would not have entered a clinical trial, and they are trimmed. Then analyze the patients not trimmed, adjusting for propensity score. Need to convince yourself that the propensity score is effectively matching patients across relevant variables. Then create a standard "Table 1" using inverse probability weighting (IPW). If confounders balance across groups then you might proceed.
- Advanced methods including marginal structural models may allow for consideration of counterfactuals, but they are usually used in longitudinal or repeated measure studies.
- Otherwise, for descriptive studies may just acknowledge confounding by indication (i.e. surgery).
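The propensity-score and IPW "Table 1" idea above can be sketched end-to-end. Everything below is simulated and illustrative (the variable `ga` and the from-scratch Newton fit stand in for real data and standard software): fit a treatment model, form inverse-probability weights, and check a standardized mean difference before and after weighting.

```python
import numpy as np

def fit_logistic(X, t, iters=25):
    """Newton-Raphson logistic regression; returns [intercept, slopes...]."""
    X1 = np.column_stack([np.ones(len(t)), X])
    b = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X1 @ b))
        H = X1.T @ (X1 * (p * (1 - p))[:, None])   # observed information
        b += np.linalg.solve(H, X1.T @ (t - p))    # Newton step
    return b

def smd(x, t, w):
    """(Weighted) standardized mean difference of covariate x between groups."""
    m1 = np.average(x[t == 1], weights=w[t == 1])
    m0 = np.average(x[t == 0], weights=w[t == 0])
    pooled = np.sqrt((x[t == 1].var() + x[t == 0].var()) / 2)
    return (m1 - m0) / pooled

# Simulated example: early surgery is more likely at lower gestational age
rng = np.random.default_rng(1)
ga = rng.normal(32, 3, 1000)                          # gestational age, weeks
t = rng.binomial(1, 1 / (1 + np.exp(0.5 * (ga - 32))))
b = fit_logistic(ga[:, None], t)
ps = 1 / (1 + np.exp(-(b[0] + b[1] * ga)))            # propensity score
w = np.where(t == 1, 1 / ps, 1 / (1 - ps))            # IPW weights
print("SMD before weighting:", smd(ga, t, np.ones_like(ga)))
print("SMD after weighting:", smd(ga, t, w))
```

A weighted SMD near zero for each confounder is the balance evidence the note asks for before proceeding.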

- Have about 450/450 cases/controls for study. Recruited cases by inclusion criteria of diarrheal illness and presentation to ED in hospitals. Recruited controls from healthy check-ups at same hospitals and matched for age group.
- Concerns for case/control recruitment might be that they are confounded by socioeconomic status (for example). It is helpful to adjust for these potential confounders (if collected) in multivariable analysis.

- Assuming cases/controls are comparable, using binary outcome (pathogen status) then do multivariable logistic regression with case status as relevant exposure. Consider adjusting for clinic site.
- Analyzing pilot data will likely be an underpowered comparison and makes it difficult to fit the full regression model that this study may require. Consider looking at distributions as a first step, and also finalizing a Statistical Analysis Plan. There are ~30 pathogens in total, but interest is mostly in ~9 emergent pathogens. Each might require a separate model unless you are interested in co-infection. If there are biological reasons to group pathogens, then you could look at co-infection.
- Consider ranking the pathogen analyses in order of importance prior to analysis, since there are issues of multiple comparisons (REFERENCE: Cook RJ, Farewell VT: Multiplicity considerations in the design and analysis of clinical trials. Journal of the Royal Statistical Society, Series A 159:93-110; 1996).

- Interested in supporting staff and/or faculty effort to analyze this data. Meridith will put them in touch with Leena Choi of allocation committee.

- Outcomes: 2 measures of pressure (MUCP and ALPP), also measuring Kegel muscle tone as 0-4 scale assigned by clinician.
- Chart review of 550 records (400 complete) from research derivative. Included: Females with urodynamics, complaint of urinary incontinence. Exclude: males, comorbidity.
- Graphical representations of paired relationships (e.g. scatter plots). Spearman's correlation is appropriate; however, the interpretation may be less meaningful than a graphic. Could also do a Spearman correlation for muscle tone scale and pressure measures.
- What are thoughts on binning 0-1 as Kegel weak and 2-4 as Kegel strong? Not recommended.
- Next step may be linear regression or a proportional odds model. Interested in predicting one measure from the other two for all possible combinations (i.e. three models) if all outcomes are of equal interest; this would assess whether all three tests are needed.
- If you decide to do parametric analysis (like linear regression) then you could conduct some influence analyses to examine the outliers.
- Include interaction terms for the two covariates on outcome.
- If a secondary motivation is to use the noninvasive test (i.e. Kegel) to predict performance on the more intensive test, then maybe leave out the other pressure measure as a covariate (since it is collected as the intensive test).
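The Spearman correlation suggested above is just the Pearson correlation of the ranks, which is easy to sketch. The MUCP/ALPP numbers below are made up, and ties would need midranks, which this minimal sketch omits:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the ranks (no tie handling)."""
    rx = np.argsort(np.argsort(x))   # rank of each observation
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

mucp = np.array([35.0, 60.0, 42.0, 80.0, 55.0])   # hypothetical pressures
alpp = np.array([70.0, 95.0, 88.0, 90.0, 130.0])
print(spearman_rho(mucp, alpp))
```

As the notes say, the number itself is less informative than the accompanying scatter plot.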

- How to handle out of range values that are still plausible? Could do proportional odds model for all three outcomes which is more robust to these values.
- Take caution not to try to correlate these outcomes with UI because the sample is biased. Limit analysis/conclusions to correlations within this population.
- Future directions could be to use these measures to look at surgical outcomes.

- Question from Frank: I want to ask you about the phrase "association between the known and unknown significance variants". I assume this should be "significant variants". Please elaborate on "significant". Any analysis that is restricted to statistically "significant" features may be problematic, causing bias and over-interpretation, plus multiplicity problems.
- 90 patients all of whom have metastatic breast cancer. Breast Cancer tumor (targeted XM) sequencing yields variants of known clinical significance and unknown clinical significance (i.e. prognostic, therapeutic, or diagnostic significance). A nonsignificant variant may be a gene that appears rarely. There are a total of ~360 variants.
- What are the differences between variants of known vs. unknown significance? Why are they classified that way? According to lab that assigns this category, this is an expert consensus and chart review.
- Is it worth pursuing germ line data? Have robust phenotype data for breast cancer (pathology, metastatic, lymph node sites)

- Since there is no clinical outcome and without a control group, it's difficult to assign significant variants. Might be worth looking at datasets that include non-cancers and cancers in order to determine how genes/variants differ across groups. Or, within cancers, how do they separate tumor subtypes or another clinical situation?
- Interested in looking at associations between "unknown significance variant" and "known significance variant". Distinguish between USV worth pursuing and USV that are not.
- Use logistic regression to predict unknown variant using known variant -- this would have issues of overfitting and correlation among variants.
- Logic regression: if variants A and B are in the known set, estimate the probability of having unknown variant Q. A regression tree-based method; an R package exists called LogicReg.
- Would it be interesting to group variants associated with an organ or pathway? There are automated ways to group gene ontology labels based on metrics of distance between labels. Cluster variants with this gene ontology domain knowledge -- Python package GOgrapher.

- Difficult to distinguish between a common variant and a significant variant.
- Discussed network graphics which show an edge if both variants are observed in the same patient; try using a different summary to draw the edge (i.e. Pearson correlation or Gini index). Color lines by positive/negative correlation, with line width proportional to the magnitude of the correlation. Only draw edges from USV to KSV and ignore all other pairs.
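The suggested graphic can be prototyped by first building a weighted edge list restricted to USV-KSV pairs. This is a sketch on a simulated 0/1 patient-by-variant matrix; the 0.2 display threshold is arbitrary, and the resulting list would feed a graph-drawing tool:

```python
import numpy as np

# Hypothetical patient-by-variant 0/1 matrix, columns split into known-
# significance variants (KSV) and unknown-significance variants (USV).
rng = np.random.default_rng(2)
n_pat, n_ksv, n_usv = 90, 5, 8
K = rng.binomial(1, 0.3, (n_pat, n_ksv))
U = rng.binomial(1, 0.2, (n_pat, n_usv))

edges = []
for i in range(n_ksv):
    for j in range(n_usv):
        r = np.corrcoef(K[:, i], U[:, j])[0, 1]   # Pearson phi for 0/1 data
        if abs(r) > 0.2:                           # arbitrary display threshold
            edges.append((f"KSV{i}", f"USV{j}", round(r, 2)))
print(edges)   # sign -> edge color, |r| -> line width in the final graphic
```

Only KSV-USV pairs ever enter the loop, matching the recommendation to ignore all other pairs.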

- Our recommendation was nothing more than a 4:1 or 3:1 match of controls to cases. We also recommended Susan attend a REDCap clinic for development of a database that will allow easy transition from data collection to data analysis.
- Given the scope of the work and that a manuscript is involved, we estimate that it will require about 45 hours of work at most.

- REDCap has some data summary capabilities, but it may be best to export data into an analysis program for graphics and summary statistics.
- Another option is VICTR support: http://biostat.mc.vanderbilt.edu/VICTRBiostatPolicies

- Our project has to do with inappropriate C. diff testing in the children's hospital pre- and post-intervention, with 13 months of data collected for both timeframes.
- Suggestion to keep age continuous, since it's likely that a 12-month-old and a 35-month-old have different C. diff testing rates.
- A generalization of the interrupted time series: Model time as a smooth curve/continuous effect with nonlinearity (e.g. restricted cubic splines) to observe the change over time.
- Poisson regression with an offset for the hospital volume or fractions using linear regression. Consider an interaction effect between age and time to explore the intervention effect by age "group". Also, model age as a smooth curve.
- To give a crude answer to the question "Is the distribution of ages of children who receive C. diff testing different pre- and post-intervention?", you can use a Wilcoxon rank-sum test. This does NOT account for seasonality; we suggest the above methods for the VICTR-supported analysis.
- Would like to apply for a VICTR voucher to support this project. Suggest applying for a standard VICTR voucher, time required 35 hours ($2000).
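A minimal version of the Poisson-with-offset suggestion above, on simulated monthly counts (the 0.05 baseline rate and the simple pre/post indicator are assumptions; for the smooth-trend generalization, a restricted cubic spline basis in time would replace the linear month term):

```python
import numpy as np

def fit_poisson(X, y, offset, iters=50):
    """Newton-Raphson Poisson regression with a log offset (e.g. log volume)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    b = np.zeros(X1.shape[1])
    for _ in range(iters):
        mu = np.exp(X1 @ b + offset)
        H = X1.T @ (X1 * mu[:, None])              # expected information
        b += np.linalg.solve(H, X1.T @ (y - mu))   # Newton step
    return b

# Hypothetical monthly counts: 13 months pre- and 13 months post-intervention
rng = np.random.default_rng(3)
month = np.arange(26)
post = (month >= 13).astype(float)
volume = rng.integers(800, 1200, 26)               # tests ordered (denominator)
rate = 0.05 * np.exp(-0.4 * post)                  # true post-intervention drop
y = rng.poisson(rate * volume)
b = fit_poisson(np.column_stack([month, post]), y, np.log(volume))
print("post coefficient (log rate ratio):", b[2])
```

The offset makes the model a rate model, so hospital volume is handled directly rather than by analyzing fractions with linear regression.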

What we have done is treat our animals with either vehicle (saline) or Neuregulin (NRG) and measure cardiac function at various time points. The two function parameters that we measured were Fractional Shortening (FS) and Ejection Fraction (EF). I have attached the raw data file as well as graphs of the raw FS and EF values. As you will see, in the NRG-treated animals, FS increased during the later stages of the treatment period while, in contrast, in the vehicle-treated animals, FS decreased during that same time period. Thus, we have a difference in the FS values between the 2 groups during those treatment stages. The same holds for the EF values as well. What we would like to know is whether, at any of those time points, the difference in FS and EF values between the vehicle- and NRG-treated animals is statistically significant.

- Looking at raw data for 3 rats - longitudinal trends - some inconsistencies
- Would be good to estimate the confidence intervals for the differences in the two groups, over time
- Analysis of the last time point: significant by an equal-variance t-test and a Wilcoxon 2-sample test
- Full longitudinal analysis would be good (e.g., generalized least squares (growth curves))
- For that, use the baseline variable as the covariate
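The baseline-as-covariate suggestion can be sketched as an ANCOVA-style analysis of the final time point (all numbers below are simulated; a full GLS growth-curve model would use every time point, not just the last):

```python
import numpy as np

# Hypothetical fractional-shortening data: final FS modeled on treatment
# group with baseline FS as a covariate.
rng = np.random.default_rng(4)
n = 12                                         # animals per group
base = rng.normal(40, 4, 2 * n)                # baseline FS (%)
grp = np.repeat([0, 1], n)                     # 0 = vehicle, 1 = NRG
# vehicle drifts down by 3; NRG gains 5 relative to vehicle (simulated truth)
final = base + 5 * grp - 3 + rng.normal(0, 2, 2 * n)

X = np.column_stack([np.ones(2 * n), base, grp])
beta, *_ = np.linalg.lstsq(X, final, rcond=None)
print("baseline-adjusted treatment effect:", beta[2])
```

Adjusting for baseline rather than analyzing raw change scores generally gives a more precise treatment-effect estimate.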

- Consider having replacement rats when deaths occur during surgery

We are performing a pilot study where we hypothesize that patients with myositis have altered metabolic capacity/low insulin sensitivity in muscle and low muscle performance compared to matched healthy controls, and that endurance exercise improves metabolic capacity/insulin sensitivity and muscle performance in patients compared to a non-exercising control group. We use delta glycerol as a surrogate marker for metabolic capacity/insulin sensitivity, and muscle performance is assessed by cycling time to exhaustion.

In the attached PowerPoint is how I would like to present my data. Slide 1: compare patients and healthy controls at baseline on muscle performance (cycling time to exhaustion), with a scatterplot to display individual values. Slide 2: within-group comparison of baseline vs. follow-up for the exercise group and the non-exercising control group, with a line plot to display individual values from baseline to follow-up.

- The file is in the conference room computer directory clinicuploads
- Discussed significant information loss and arbitrariness of responder analysis by email
- Also discussed conservatism of Fisher's "exact" test
- Disease activity is determined from several ordinal and continuous variables; consider ranking each of these variables across subjects and computing the average rank, then analyzing average rank vs. treatment group. This collapses several variables into one.
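The average-rank composite suggested above can be sketched as follows (tiny made-up numbers; ties would need midranks, which this sketch omits). The resulting per-subject average rank would then be compared between treatment groups, e.g. with a Wilcoxon rank-sum test:

```python
import numpy as np

def average_rank(variables):
    """variables: list of 1-D arrays, one per measure, same subject order.
    Ranks each measure across subjects and averages the ranks per subject."""
    ranks = [np.argsort(np.argsort(v)) + 1 for v in variables]  # 1-based ranks
    return np.mean(ranks, axis=0)

# Two hypothetical disease-activity measures for four subjects
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([10.0, 40.0, 20.0, 30.0])
print(average_rank([a, b]))
```

This is the "collapsing several variables into one" step from the note, done on the rank scale so differently scaled measures contribute equally.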

- Email: When we plot %aggregation against genotype we have the following graph (below). It appears that there is a trend toward decreased aggregation in the aspirin-treated group with AA versus AG versus GG. When John conducted a t-test of aspirin AA versus aspirin GG, the p-value was 0.037. Is there a way to use these findings? Is there a way to assess for trend in the ASA data?
- two continuous measures pre and post along with categorical SNP groups
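One simple way to assess the trend question above (assumption: an additive allele effect, coding the genotypes AA=0, AG=1, GG=2; the data below are made up) is to regress %aggregation on allele count and test the slope:

```python
import numpy as np

def trend_test(values, alleles):
    """Linear trend test: slope of values on allele count, with t statistic."""
    x, y = np.asarray(alleles, float), np.asarray(values, float)
    n = len(y)
    b1 = np.cov(x, y, bias=True)[0, 1] / x.var()       # least-squares slope
    b0 = y.mean() - b1 * x.mean()
    resid = y - (b0 + b1 * x)
    se = np.sqrt(resid @ resid / (n - 2) / (n * x.var()))
    return b1, b1 / se                                 # slope and t statistic

# Hypothetical %aggregation for 3 subjects per genotype
agg = [10, 11, 9, 20, 21, 19, 30, 31, 29]
geno = [0, 0, 0, 1, 1, 1, 2, 2, 2]
print(trend_test(agg, geno))
```

This uses all three genotype groups in one test rather than a single pairwise AA-vs-GG comparison, which is the usual way to formalize "assess for trend".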

- I would like to go over a few requested manuscript edits from a statistical editor. The study is a failure mode analysis of a particular medical infusion pump, where we performed benchtop trials with different flow conditions and evaluated pump performance. I've copied the reviewer comments below and bolded the comments that I would like to discuss.

- Assuming that the data is in good shape, it is reasonable to request a $2000 (35 hour) VICTR voucher for support of statistical analysis for 1 manuscript.

- I would like to reserve some time to come for an initial consultation regarding a clinical care project for amputee care. I am planning to apply for VICTR funding soon. I am looking to identify and link meaningful metrics (such as pelvic motion) to amputee walking performance in order to develop a useful tool for clinical assessment.
- There is a way to perform non-invasive pelvic motion analysis that affords:
- Cadence
- Speed
- Stride length
- % stride length/height
- Gait cycle duration
- Step length for each lower limb
- Stance phase duration for each lower limb
- Swing phase duration for each lower limb
- Single support duration for each lower limb
- Double support duration
- Pelvic girdle angles (sagittal, coronal, transverse plane rotations) with estimation of side-to-side symmetry
- I am looking to also collect data such as Age, Height, weight, Type of amputation (above or below knee), Type of surgical technique (myoplasty/myodesis), Residual limb length, Time since amputation, Need for prosthetic device revisions (how many / year), Activity level (hours of use per day), Level of activity intensity with device (mild, moderate, high-impact), Device composition (more for reference, e.g. type of foot, liner, knee)

- In having participants perform walking trials, I am looking to establish amputee-specific patterns (in means, ranges, and clinically significant threshold values of pelvic motion) but also to perform covariate analyses to see if specific indices can be developed to establish clinically significant metrics (and overall trends) to help surveil performance and suggest interventions in a timely manner.
- Current gold standard is an expensive infrared set-up (instrumented gait motion capture lab), new tech is a sensor on a belt that can measure multiple movements. Want to ascertain whether the data from the tech can be used as a global tool for prosthetic performance. Would like to establish threshold values for "safe", "requires monitoring", "unsafe".
- This would be an exploratory study with multiple levels of intervention. Possible to have 20 people with multiple measures and control over 1 aspect of limb.
- Collecting a lot of data makes sense provided the variety of patients, movements and sensor output.

- Initial aim would be to capture subjective feedback from patients/provider with the sensor output.
- Difficulty sounds like establishing a gold standard. There are many methods for correlating this sensor output with a gold standard.
- A different type of study would be to select patients with a specific amputation and generate hypotheses surrounding gait measures that require intervention. Randomize to 2 groups -- 1 with device and 1 without device, then determine clinical outcomes.
- May need to document what constitutes pathologic walking styles.
- First do a descriptive study of groups of individuals and their movements.
- Within subjects, are these measures reproducible and reliable?

- Suggest that VICTR be used as mechanism to generate data for larger research study. How many patients? Hard to answer this question without a testable hypothesis.
- Recommend collecting equal numbers of groups (i.e. normal, type of amputation). Simplify the data collection as much as possible. Erring on the side of large amount of data to support exploratory analysis can't hurt if it's feasible.

- Sample size options:
- To officially establish a sample size, we could focus on the precision of the confidence intervals to estimate population means for subgroups.
- If you have 20 patients per parameter, you have good predictive ability. An example of this type of justification is in: Arnold, Donald H., Tebeb Gebretsadik, Karel GM Moons, Frank E. Harrell, and Tina V. Hartert. "Development and Internal Validation of a Pediatric Acute Asthma Prediction Rule for Hospitalization." The Journal of Allergy and Clinical Immunology: In Practice (2014). Meridith will forward this PDF along.

- A VICTR voucher of at least 90 hours is recommended. We advise working with the VICTR statistician prior to data collection. If analysis for a given study requires more than 90 hours of work, it will be the PI’s responsibility to provide a center number to the department of Biostatistics for the remainder. Bill Dupont, Yuwei Zhang, and Meridith Blevins were in attendance.

Topic revision: r1 - 18 Jan 2021, DalePlummer
