Methods & Data Notes

  • How the survey was administered:
    • The MLI Survey is administered at least once per year to members of RAND’s American Teacher Panel (ATP) and American School Leader Panel (ASLP). RAND established these two educator panels in 2014, when new policies such as the Every Student Succeeds Act (ESSA) and the Common Core State Standards (CCSS), as well as new forms of assessments and evaluations, created a need to understand how such policies affect schools and student achievement. Since their inception, the panels have continued to grow to better represent the voices of teachers and school leaders.
    • Each survey took about 30 minutes to complete online.
    • For more information about the MLI Survey, please visit About the MLI Survey.
       
  • Definition of a complete survey:
    • For the May 2017 Teacher Survey: to be considered “complete,” remain in the data file, and receive a weight, a teacher had to complete at least 50% of the core survey.
    • For the May 2017 School Leader Survey: to be considered “complete,” remain in the data file, and receive a weight, a school leader had to complete at least 33% of the core survey.
       
  • What is included in the Sample Size (N-size) of a survey question?
    • The n-size for a given question is the unweighted number of respondents who were not logically skipped past the question. The n-size includes cases in which a school leader or teacher was asked a question but did not provide a response.
    • The n-size excludes logical skips, i.e., questions that were not asked of certain school leaders or teachers due to the questionnaire’s skip logic (e.g., not asking elementary school leaders about college preparation).
    • Other than the first module (Your School Assignment) and last module (Demographics), the order in which modules appeared to a respondent was randomized. Consequently, the missingness for respondents who did not reach the end of the survey is not concentrated in any one module.
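The n-size rule above can be sketched in a few lines of Python. The record fields (`shown`, `response`) are hypothetical stand-ins for the actual MLI data layout:

```python
# Sketch of the n-size rule: count every respondent who was shown the
# question (per skip logic), whether or not they answered it; respondents
# who were logically skipped past the question are excluded entirely.
# The field names below are illustrative, not the actual MLI schema.

records = [
    {"shown": True,  "response": "Agree"},     # answered -> counts
    {"shown": True,  "response": None},        # saw it, left blank -> counts
    {"shown": False, "response": None},        # logical skip -> excluded
    {"shown": True,  "response": "Disagree"},  # answered -> counts
]

n_size = sum(1 for r in records if r["shown"])
print(n_size)  # 3
```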
       
  • Margin of error calculations:
    • The population variance estimate and standard error are computed using the weighted survey responses via a jackknife procedure that incorporates the replication weights. The reported margin of error of the survey sample is subsequently derived using the estimated standard error. 
    • Bento has pre-calculated margins of error for all survey proportions at the national and state levels. For custom-filtered data (i.e., any visualization with more than one location filter applied, or with one or more other filters applied), users must choose Options > Calculate Margin of Error to view margin-of-error information. This process may take 10 minutes or more. Users are strongly encouraged to calculate margins of error for any visualization they intend to use to inform decisions or in work products.
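A minimal sketch of the jackknife calculation described above, assuming 80 replicate weights named as in the data notes (weights1, …, weights80). The data here are synthetic, and the (R−1)/R scaling factor is an assumption about the replication design, not RAND's documented constant:

```python
import math
import random

random.seed(0)

# Synthetic data: a 0/1 survey item, a main weight, and 80 replicate weights.
n, R = 200, 80
y = [random.random() < 0.6 for _ in range(n)]          # item responses (0/1)
w_main = [random.uniform(0.5, 2.0) for _ in range(n)]  # "weights"
w_rep = [[w * random.uniform(0.8, 1.2) for w in w_main]
         for _ in range(R)]                            # "weights1"..."weights80"

def wprop(y, w):
    """Weighted proportion of a 0/1 item."""
    return sum(wi for yi, wi in zip(y, w) if yi) / sum(w)

theta = wprop(y, w_main)               # point estimate using the main weight
reps = [wprop(y, wr) for wr in w_rep]  # one estimate per replicate weight

# Jackknife variance across replicates; the exact scaling factor depends on
# the replication design, so (R - 1) / R here is illustrative.
var = (R - 1) / R * sum((t - theta) ** 2 for t in reps)
se = math.sqrt(var)
moe = 1.96 * se                        # 95% margin of error

print(round(theta, 3), round(moe, 3))
```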
       
  • Calibration weights and replicate weights:
    • In addition to variables that provide responses to survey items, the underlying data include weighting variables that can be applied so that weighted estimates reflect the national population of teachers or school leaders. There are a total of 81 weighting variables: one main weight (“weights”) and a series of 80 replicate weights (“weights1”, …, “weights80”). The replicate weights are needed to produce accurate variance estimates because of the complex sampling design employed in the MLI Survey. The design is “complex” in that it is not a simple random sample of teachers or school leaders nationally; many states are oversampled to produce state-level estimates.
    • Weights are designed to ensure that the weighted sample does not under- or over-represent certain types of teachers or school leaders. Main weights are calculated by first adjusting known sampling probabilities (which differ primarily across states) for a teacher’s or school leader’s likelihood of responding to the survey. That is, response probabilities of school leaders or teachers are modeled across a wide variety of characteristics, and the main weights are produced by combining the estimated response probabilities with the known sampling probabilities. The main weights are then calibrated so that the weighted sample matches the known school leader or teacher population across these characteristics. Characteristics that factor into this process include descriptors at the individual level (e.g., gender, professional experience) and the school level (e.g., school size, level, urbanicity, socioeconomic status). Replicate weights are calculated by removing 1/80th of the sample and repeating the main-weight process on the remaining respondents.
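The weighting steps described above (sampling probability, nonresponse adjustment, then calibration) can be sketched as follows. The response propensities, the single calibration dimension, and all numeric values are simplifications of the actual multi-characteristic procedure:

```python
# Sketch of main-weight construction, under heavy simplification:
#   1) base weight = 1 / sampling probability (differs by state),
#   2) divide by an estimated response propensity (nonresponse adjustment),
#   3) calibrate so weighted totals match known population counts.
# Calibration here is simple post-stratification on one characteristic;
# the real procedure calibrates across many individual- and school-level traits.

respondents = [
    {"samp_prob": 0.02, "resp_prob": 0.5, "level": "elementary"},
    {"samp_prob": 0.02, "resp_prob": 0.8, "level": "elementary"},
    {"samp_prob": 0.05, "resp_prob": 0.4, "level": "secondary"},
    {"samp_prob": 0.05, "resp_prob": 0.6, "level": "secondary"},
]
population = {"elementary": 150.0, "secondary": 100.0}  # illustrative totals

# Steps 1-2: nonresponse-adjusted design weight.
for r in respondents:
    r["w"] = 1.0 / r["samp_prob"] / r["resp_prob"]

# Step 3: post-stratification -- scale each cell to its population total.
for level, target in population.items():
    cell = [r for r in respondents if r["level"] == level]
    scale = target / sum(r["w"] for r in cell)
    for r in cell:
        r["w"] *= scale

# Weighted counts now match the population totals by construction.
print(round(sum(r["w"] for r in respondents if r["level"] == "elementary"), 6))  # 150.0
```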
       
  • Sample size and warning labels:
    • The ATP sample is designed to be of sufficient size to facilitate national analyses as well as analyses of prevalent subgroups at the national level (e.g., descriptors of nationwide math teachers). Similarly, the panel is designed to permit analyses of the following geographic areas: AL, AR, CA, CO*, DE, FL, GA, IL, KY, LA, MA, MD, MS, NC, NM, NY (including New York City), OK, SC, TN, TX, VA, and WV. One may also examine prevalent subgroups within these areas (albeit cautiously, due to lower precision for smaller groups). The ATP sample is not designed to permit analyses within geographic areas not listed above or of less prevalent subgroups.
    • The ASLP sample is designed to be of sufficient size to facilitate national analyses as well as analyses of prevalent subgroups at the national level (e.g., descriptors of nationwide elementary school leaders). Similarly, the panel is designed to permit analyses of the following geographic areas: AL, AR, CA, FL, GA, IL, KY, LA, MA, MD, MS, NC, NM, NY (including New York City), OK, SC, TN, TX, VA, and WV. One may also examine prevalent subgroups within these areas (albeit cautiously, due to lower precision for smaller groups). The ASLP sample is not designed to permit analyses within geographic areas not listed above or of less prevalent subgroups.
    • Bento displays warning labels when the question sample size of a particular visualization is less than or equal to 50 participants.
    • Bento suppresses a visualization if the question sample size is less than or equal to 20 participants.
      • Note: the question sample size includes respondents who responded to the question and those who saw but did not respond to the question. It excludes respondents who did not respond due to logical skips. In cases where sample sizes varied across sub-question items, the question sample size equals the maximum sub-question item sample size.
    • High rate of non-response warning: if 20% or more of respondents who saw a given survey question did not respond to part or all of it, Bento displays a warning label to caution viewers when interpreting that question’s results.
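Taken together, the display rules above reduce to a small decision function. The sketch below is an illustrative restatement of those rules, not Bento's actual code:

```python
def display_status(item_n_sizes, n_saw, n_answered_fully):
    """Illustrative restatement of Bento's display rules (not actual code).

    item_n_sizes: n-sizes of the sub-question items; the question n-size is
                  their maximum (per the note above).
    n_saw: respondents who saw the question (excludes logical skips).
    n_answered_fully: respondents who answered the question in full.
    """
    n = max(item_n_sizes)
    if n <= 20:
        return "suppressed"                       # too few participants to display
    flags = []
    if n <= 50:
        flags.append("small-sample warning")      # warning label at n <= 50
    if n_saw and (n_saw - n_answered_fully) / n_saw >= 0.20:
        flags.append("high nonresponse warning")  # >= 20% partial/non-response
    return ", ".join(flags) if flags else "displayed"

print(display_status([18, 15], 18, 18))  # suppressed
print(display_status([45], 45, 45))      # small-sample warning
print(display_status([100], 100, 75))    # high nonresponse warning
print(display_status([200], 200, 190))   # displayed
```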

For more information on survey methodology, visit RAND's Methodology website.

*ATP Colorado estimates are weighted to the full sample of Colorado teachers and use nonresponse adjustments due to the absence of teachers from Jefferson County.