The bootstrap is a powerful tool for understanding the properties of statistical estimates. It gives statisticians easy access to estimates of bias, standard errors, and coverage probabilities under fairly general conditions. Along with this easy access to otherwise intractable quantities has come some carte blanche application: many statisticians and lay users treat the classic bootstrap estimates of standard error and bias as worry-free solutions that require no careful consideration before use. This seminar will focus on changing the common perception of the bootstrap from a tool for calculating properties of a sampling distribution to a methodology for understanding a particular statistical analysis. It will demonstrate how the bootstrap, the double bootstrap, and some modifications thereof can be used to check the assumptions of a model, check the bootstrap's own assumptions, and expand the class of problems to which the bootstrap is applicable. Ultimately, this approach promotes a better understanding of a particular data set and suggests a route to potentially more reliable bootstrap estimates.
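The classic bootstrap estimates of bias and standard error mentioned above can be sketched as follows. This is a minimal illustration of the standard nonparametric bootstrap, not a method taken from the seminar itself; the function name and example data are purely illustrative.

```python
import random
import statistics

def bootstrap_bias_and_se(data, estimator, n_boot=2000, seed=0):
    """Classic nonparametric bootstrap estimates of bias and standard error.

    Resamples `data` with replacement, recomputes the estimator on each
    resample, and summarizes the resulting bootstrap replicates.
    """
    rng = random.Random(seed)
    theta_hat = estimator(data)  # estimate on the original sample
    replicates = []
    for _ in range(n_boot):
        # Draw a resample of the same size, with replacement
        resample = [rng.choice(data) for _ in data]
        replicates.append(estimator(resample))
    bias = statistics.mean(replicates) - theta_hat
    se = statistics.stdev(replicates)
    return bias, se

# Illustrative data; estimating bias and SE of the sample mean
data = [1.2, 0.8, 3.5, 2.1, 0.4, 5.9, 1.7, 2.8, 0.9, 4.3]
bias, se = bootstrap_bias_and_se(data, statistics.mean)
```

The "worry-free" perception the abstract criticizes stems from exactly this simplicity: the same few lines run regardless of whether the resampling scheme actually reflects how the data arose, which is why checking the bootstrap's own assumptions (e.g., via a double bootstrap) matters.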