Report Reader Checklist: Transparency

Transparency

It is important for research reports to be as transparent as possible, not only so you can understand the study, but also so you can adequately evaluate its quality. Further, it is an ethical research practice to provide enough detail about a study that others could easily replicate it or perform their own data analysis and/or coding of the data. Reports that are fully transparent include data tables and research instruments (surveys, interview protocols, etc.) where possible. If these elements are missing, your ability to verify results and evaluate the study is diminished.

If any of the following are missing, you will not be able to fully evaluate the authors’ results and recommendations.

a. Raw quantitative data (e.g., tables of frequency counts) for the entire study are included in the report or in an appendix for you to reference.

If a report includes quantitative analysis, it is a best practice to include the frequency tables for you to reference. This allows you to verify the study findings or to refer to the original data if you have questions about how a particular result was calculated. Providing the data tables also allows other researchers to build on the original dataset. If data tables are not provided, the authors should explain why they are not included.
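To make the idea of a frequency table concrete, here is a minimal sketch in Python. The response options and counts below are invented for illustration and do not come from any cited report; the point is simply that a frequency table records how many participants gave each answer, which is the raw material a reader needs to verify reported percentages.

```python
from collections import Counter

# Hypothetical survey responses (invented data, not from any real study)
responses = [
    "Agree", "Agree", "Neutral", "Disagree", "Agree",
    "Strongly agree", "Neutral", "Agree", "Strongly agree", "Disagree",
]

# Tally how many times each response option appears
counts = Counter(responses)
total = len(responses)

# Print a simple frequency table with counts and percentages
for option, n in sorted(counts.items()):
    print(f"{option:<16} n={n:<3} {100 * n / total:.1f}%")
```

With a table like this in an appendix, a reader can recompute any percentage reported in the text (e.g., confirming that 4 of 10 respondents, or 40%, chose "Agree").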

b. The instrument and/or study protocol are provided in the report or as an appendix.

In order to fully understand the scope of a research design, it can be helpful to view any instruments, such as surveys, interview protocols, or focus group questions. By providing the instrument and/or study protocols, the report authors allow you to see the full picture of what was asked of research participants, as well as how the questions were worded and the order in which they were asked. This level of transparency also lets you see whether any data gathered for particular questions are not reported in the study findings (if so, the report authors should explain why).

c. The authors are clear about any conflicts of interest or other motivations for their role in the study.

Sometimes studies are conducted in whole or in part by vendors or companies, or as part of market research campaigns. When this is the case, and the vendor or company may be perceived as having a conflict of interest regarding the study results, the report should make clear what role the vendor and/or company played in funding, study design, implementation, data analysis, report writing, and dissemination.

d. Any commentary or discussion is rooted in data results or study findings shared within the report.

In the discussion of the findings, authors should not make claims that are beyond the scope of the data presented in the report. For example, the authors should not make inferences from the data that are not supported by the study results.

Examples

a. Raw quantitative data (e.g., tables of frequency counts) for the entire study are included in the report or in an appendix for you to reference.

Linder, K. (2016). Student uses and perceptions of closed captions and transcripts: Results from a national study. Corvallis, OR: Oregon State University Ecampus Research Unit. [link]
  • See pages 42-50 for tables of frequency counts for the entire study.

b. The instrument and/or study protocol are provided in the report or as an appendix.

Magda, A. J., & Buban, J. (2018). The state of innovation in higher education: A survey of academic administrators. Louisville, KY: The Learning House, Inc. [link]
  • See pages 33-40 for appendices that provide the survey questions and qualitative interview questions used in the study.

c. The authors are clear about any conflicts of interest or other motivations for their role in the study.

Setser, B. & Morris, H. (2015). Building a culture of innovation in higher education: Design & practice for leaders. Louisville, CO: EDUCAUSE. [link]
  • See pages 53-56 for a description of the researchers and their interests and affiliations.

d. Any commentary or discussion is rooted in data results or study findings shared within the report.

Jaschik, S. & Lederman, D. (2018). 2018 Survey of faculty attitudes on technology. Washington, DC: Inside Higher Ed. [link]
  • This report does a good job of limiting discussion to the findings presented in the results section.

What are theoretical frameworks?

In research, theories are explanations for the kinds of associations that researchers expect to find in a study. These explanations are based on prior research and understanding of the topic. For example, past research may have suggested that class attendance and higher grades tend to go hand in hand. A researcher might look at this work and decide to do a study to see if giving incentives for class attendance could improve grades. This past research and understanding regarding attendance and grades can provide an explanation for what the researcher expects to happen in their study (e.g., “Since students who attend class often get higher grades, I expect that incentives for attendance will lead to higher attendance, and therefore, improve grades.”). Theories can also provide explanations for the expected association among variables or concepts. For example, students who attend class often may receive higher grades because they interact more with course content by participating in discussions and class exercises. In research, theories are not merely personal guesses; rather, they are based on (or situated in) past work and explain why certain patterns and associations may exist in the study.

What are qualitative research methodologies?

Qualitative research methodology generally refers to open-ended approaches that do not rely on numbers or statistics. Some common qualitative methodologies include open-ended survey items (e.g., “Describe some challenges you encountered when beginning your online program.”), one-on-one interviews, focus groups and participant observation. In qualitative methodology, the data collected include participant responses (e.g., what they said in response to the interview or survey questions) as well as researcher notes (e.g., notes on behavior the researcher observed). Researchers analyze the data by identifying common themes and patterns in participant responses and behavior. While qualitative methodologies are often used with small sample sizes, they can provide in-depth, context-specific descriptions of phenomena.

What are quantitative research methodologies?

Quantitative research methodology generally refers to closed-ended approaches that seek to collect numeric data and often involve statistics. Some common ways to collect quantitative data include closed-ended survey items (e.g., “Rate your satisfaction with your educational experience on a scale from 1 (very dissatisfied) to 10 (very satisfied).”), assessments (e.g., percentage of questions answered correctly on an exam), and frequencies (e.g., number of times students viewed an instructional video). Since quantitative data are numeric, statistics can be used to describe single variables (e.g., “On average, how satisfied are students with their courses?”), compare groups (e.g., differences in satisfaction between traditional and nontraditional students), and identify relationships between variables (e.g., “How does satisfaction relate to exam scores?”). Quantitative methodologies often allow for large sample sizes and can provide a big picture of tendencies and overall associations.
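The first two kinds of questions above (describing a single variable and comparing groups) can be sketched with a few lines of Python. The satisfaction ratings below are invented for illustration, not taken from any study; the sketch only shows what "average satisfaction" and "difference between group means" look like as calculations.

```python
import statistics

# Hypothetical satisfaction ratings on a 1-10 scale (invented data)
traditional = [7, 8, 6, 9, 7, 8]
nontraditional = [5, 6, 7, 5, 6, 8]

# Describe a single variable: average satisfaction across all students
overall_mean = statistics.mean(traditional + nontraditional)

# Compare groups: difference between the two group means
group_difference = statistics.mean(traditional) - statistics.mean(nontraditional)

print(f"Overall mean satisfaction: {overall_mean:.2f}")
print(f"Mean difference (traditional - nontraditional): {group_difference:.2f}")
```

A full report would go further (e.g., testing whether such a difference is statistically significant), but even this simple arithmetic is the kind of result a transparent report lets you recompute from its data tables.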

What are mixed methodologies?

Research that utilizes mixed methodologies, or “mixed methods” research, uses a combination of qualitative and quantitative methods to investigate the research question(s). By using mixed methods, researchers can take advantage of what both qualitative and quantitative methods have to offer. A report that uses mixed methodologies will include a description of coding processes and themes (qualitative) as well as a description of statistical analyses completed and results of statistical tests (quantitative). Some studies will use the same participants for both the qualitative and quantitative components (e.g., Thirty participants completed a quantitative questionnaire and qualitative interview.), while other studies may use different groups of participants for each component (e.g., Thirty participants completed a quantitative questionnaire, while eight participants completed qualitative interviews.).

What is validity?

Validity in research refers to whether a study measures what it is meant to measure. For example, if a researcher wants to measure academic success, they need to a) define what academic success is, and b) find a way to accurately measure participants’ academic success levels. If a researcher were to define academic success as “performing well in required academic courses,” and then measure participants’ GPA, it would be important for researchers and readers (including you) to evaluate whether GPA measured what the researchers intended to study. A research report should describe how the data were collected so you can evaluate whether the study accurately measured what was intended.

What is the difference between a population and a sample?

A population is the entire group of people that researchers hope a study can apply to. For example, if a study intends to inform learning in online higher education, then the population for that study would include all online learners in higher education (across institutions). Typically, it is nearly impossible to recruit everyone from a population to participate in a study. Researchers usually recruit a smaller group of participants from within that population. For example, researchers might recruit 50 online learners from three different institutions in higher education to participate in the example study mentioned earlier. This set of participants recruited for the study is called a sample. It is important for research reports to describe the sample (the participants in the study) so that you can see how the sample differs from the general population. For example, did the participants come from one institution? Is the race/ethnicity of the participants in the sample similar to that of the population? Often researchers seek to recruit a sample that is representative of (or closely resembles) the population so that the results more accurately apply to everyone in the population.
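The population-versus-sample distinction can be sketched in code. Everything below is hypothetical (a made-up population of 500 learner IDs, not real people): the sketch simply shows a simple random sample being drawn, one common way researchers select participants from a population.

```python
import random

# Hypothetical population: 500 online learners (invented IDs, not real people)
population = [f"learner_{i}" for i in range(500)]

# Draw a simple random sample of 50 participants without replacement,
# so no learner is selected twice
random.seed(42)  # fixed seed so this sketch is reproducible
sample = random.sample(population, k=50)

print(f"Population size: {len(population)}")
print(f"Sample size: {len(sample)}")
```

In practice, recruitment is rarely this clean (volunteers self-select, some people decline), which is exactly why reports should describe who the sample actually was and how it may differ from the population.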

What is generalizability in research?

In research, generalizability describes whether research findings can apply to a broader population. For example, a study may have been conducted using a sample of 30 undergraduate students. While the findings of that study will apply to those 30 students, it is likely that the researchers hope the findings can apply to a larger population. For example, they may want to apply the findings to all undergraduate students at a particular university, or undergraduate students in general. Research is considered generalizable if the findings can be applied to a broader population than the sample used for the study.

Information about a study’s participants, methods and limitations of the research can help you evaluate whether findings may generalize to broader populations. For example, if a report identifies that their entire sample of participants were recruited from one university, that information can identify a possible limitation to generalizing the findings to university students in general.

What is data visualization?

Data visualization refers to the charts, graphs and other visuals researchers use to present the data and the study’s results. For example, researchers might include a bar graph that compares academic success for different student groups or a line graph of students’ GPA over time. Data visualization should make it easier for you to see the story that the data or research findings tell. Good data visualization should be easy to understand and should complement and add to the descriptions in the text.

What is conflict of interest?

It is important for researchers to do their best to be neutral in the research process. In other words, researchers should not be too invested in any particular outcome of a study. A conflict of interest occurs when a researcher is involved in something that could lead to bias in the research process. For example, if an individual does research at a university and consulting at a company, there would be a conflict of interest if the company they consult for were to fund their university research. While there are steps that can be taken to reduce bias when a conflict of interest exists (such as the researcher asking a collaborator to handle the data), it is important for this kind of information to be included in research reports.