Are Faculty Surveys Biased?

Angie Miller and Amber Dumford — A paper recently presented at the annual meeting of the American Educational Research Association used Faculty Survey of Student Engagement (FSSE) data to investigate the impact of social desirability bias on faculty survey responses.  Social desirability bias, the tendency for survey respondents to provide answers that cast them in a favorable light, has long been a concern for surveys of sensitive or taboo topics such as drug use or sexual behavior.  However, there is mixed evidence for the presence of social desirability bias in student self-report surveys, including previous research examining social desirability bias on NSSE (Miller, 2012).  Data from a subsample of faculty at 18 institutions participating in the 2014 FSSE administration suggested that, in general, social desirability bias does not have a major effect on survey responses.  However, faculty responses on the Effective Teaching Practices Scale may be somewhat influenced by social desirability bias.  This may be due to the similarity between the items in this scale and those found on course evaluations.  Recent controversy surrounding bias in course evaluations, as well as their high-stakes nature, might make these seemingly innocuous items feel contentious for some faculty.

While surveys can easily gather large amounts of data, the use of self-reports sometimes raises concerns about data quality. To minimize the potential that certain questions will prompt untruthful answers as respondents attempt to provide a socially appropriate response, researchers can examine whether social desirability bias is present in the data. Although encouraging student engagement is not what one might consider a “sensitive” topic, faculty may be aware that their institutions want to see responses reflecting higher levels of engagement, and they may want to appear to be “good” employees. Therefore, the current study was developed to address the issue of social desirability bias and self-reported engagement behaviors at the faculty level.

For this study, data from the 2014 FSSE administration were used. In addition to the core survey (including FSSE scales and faculty demographics), a subsample of 1,574 respondents completed additional experimental items on social desirability (Ray, 1984).  While these respondents came from a subset of the institutions participating in FSSE, the subset was selected by random assignment, and the resulting 18 institutions mirrored the overall national landscape in size, Carnegie classification, and control.

A series of ten ordinary least squares (OLS) regression analyses, controlling for certain faculty and institutional characteristics, was conducted. Results from the regression models suggest that in all cases, social desirability bias does not appear to be a major factor in faculty members’ responses to the questions in the FSSE scales. For four of the 10 models, the effect of social desirability was not statistically significant, meaning that social desirability bias did not influence the responses. In the remaining six cases, although the effect was statistically significant, the sizes of the coefficients suggest that the effects are not practically significant: a slight influence may be present, but it does not have a substantive impact on the responses.  Furthermore, the change in explained variance when the social desirability score was entered as the second step of each model was quite small, even for the statistically significant models.  This suggests that the other variables in the models have a much greater influence than social desirability does.
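The two-step logic described above — fitting a baseline model with background covariates, adding the social desirability score as a second step, and inspecting the change in explained variance (ΔR²) — can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the authors' actual analysis; the covariate names (`rank`, `discipline`) and all coefficients are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic, hypothetical data: two faculty covariates and a social
# desirability score predicting an engagement-scale outcome. The small
# 0.1 weight on social_des mimics a statistically detectable but
# practically trivial effect.
rank = rng.integers(0, 3, n).astype(float)        # hypothetical faculty rank
discipline = rng.integers(0, 2, n).astype(float)  # hypothetical discipline group
social_des = rng.normal(0.0, 1.0, n)              # social desirability score
outcome = 0.5 * rank + 0.3 * discipline + 0.1 * social_des + rng.normal(0.0, 1.0, n)

def r_squared(X, y):
    """R^2 from an OLS fit with an intercept, via least squares."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# Step 1: covariates only.  Step 2: add the social desirability score.
r2_step1 = r_squared(np.column_stack([rank, discipline]), outcome)
r2_step2 = r_squared(np.column_stack([rank, discipline, social_des]), outcome)
delta_r2 = r2_step2 - r2_step1

print(f"Step 1 R^2 = {r2_step1:.3f}")
print(f"Step 2 R^2 = {r2_step2:.3f}")
print(f"Change in R^2 = {delta_r2:.3f}")
```

A small ΔR² at step 2, as in the study, indicates that the social desirability score adds little explanatory power beyond the background characteristics.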

The only model warranting further consideration of bias was the one with Effective Teaching Practices as the outcome variable.  This scale was the most strongly predicted by social desirability scores.  Although still small in magnitude (β = .220), this relationship might be partially explained by the similarity between these items and ones found on course evaluations at many institutions. Faculty might be more likely to over-report how often they do things like “clearly explain course goals and requirements” and “provide prompt and detailed feedback on tests or completed assignments” because when these items are asked of students in the context of course evaluations, there are higher stakes associated with the results. While this is a possible concern for interpreting results from faculty surveys, it should be noted that the practical significance of this connection is low.

For more information, please see the social desirability report in FSSE’s Psychometric Portfolio.

Miller, A.L., & Dumford, A.D.  (2017, April).  Social desirability bias and faculty respondents: Is “good behavior” harming survey results? Paper presented at the Annual Meeting of the American Educational Research Association, San Antonio, Texas.