

Online Learning Efficacy Research Database FAQ

This database is a searchable resource of academic studies on the learning outcomes of online and/or hybrid education in comparison to face-to-face environments. View overview or browse the database.

About the database

What is online learning efficacy research?

Online learning efficacy research explores whether the learning outcomes of online and/or hybrid/blended (online and in-person) education are equivalent to face-to-face environments. Some studies also compare multiple online modalities to one another.

This research commonly compares the same course in a given discipline taught across different modalities.

Why was the database created?

Skepticism about online education remains a significant barrier to engaging faculty in online teaching and learning initiatives. We repeatedly receive questions from faculty about whether online environments are equivalent to face-to-face environments in terms of learning outcomes.

This database is meant to provide faculty, administrators of distance education programs, and other relevant stakeholders with a way to quickly and easily search the literature on online learning efficacy.

How many citations are currently included in the database?

The database has 194 citations across 73 discrete disciplines from 142 academic journals.

How is this database different from the No Significant Difference database?

There are several ways this database differs from the No Significant Difference database, including date range, outcomes categories, the inclusion of hybrid/blended research and the exclusion of dissertation studies. Each is described in more detail below.

Dates

While the No Significant Difference database includes studies from 1928 through 2013, you will notice that our database only includes studies from 1998 through the present.

Prior to 1998, we found that research publications in distance education were largely limited to studies of pre-internet telecourses and effectiveness studies of outdated technologies, such as CD-ROM-based courses, which are no longer relevant to many institutions currently offering online education.

Given that the goal of this database is to provide relevant information to faculty and others looking for current research on online education efficacy, we decided to include only studies published after 1997.

Outcomes categories

The No Significant Difference database allows users to search by outcomes of the studies, including “no significant difference,” “better in the classroom,” “better with technology” and “mixed results.”

Our intention is for users to engage with the database and make their own determinations of the outcomes of a study without the bias of an outcome label. In particular, we found the studies in the database vary considerably in terms of study design and rigor, which could significantly impact how a user would perceive the study outcome.

We leave the assessment of the value of each study’s outcomes to individual database users.

Blended/hybrid

When compiling citations for the database, we have included studies that explore the efficacy of hybrid/blended courses (that is, courses that combine face-to-face and online instruction). These studies are only minimally included in the No Significant Difference database.

Dissertations

Dissertations are included in the No Significant Difference database along with all other citations, but they are not currently searchable in our database. When we compiled citations, many dissertations were not available for review, so we could not make determinations about many of the variables used to search the database entries.

However, because dissertations represent important contributions to the field of online education efficacy research, we have created a downloadable PDF of all of the dissertation citations we have discovered on the topic of online efficacy research.

Why doesn’t the database categorize items by outcome of the study?

Our intention is for users to engage with the database and make their own determinations of the outcomes of a study without the bias of an outcome label. In particular, we found the studies in the database vary considerably in terms of study design and rigor, which could significantly impact how a user would perceive the study outcome.

We leave the assessment of the value of each study’s outcomes to individual database users.

How were each of the variables determined?

Sample size

Sample sizes were retrieved from the text of each published article and include the total N for the study. If a publication reported results from more than one study, we combined the Ns. Some larger scale studies included ranges of sample sizes, and we chose to list the highest number in those ranges. We intend the sample size field to be used as a guide for users to seek out larger or smaller scale studies, as appropriate.

Discipline

The discipline field represents the specific discipline of the course(s) under study in each of the articles. As some of the articles examined outcomes from very specific or highly specialized fields, we did not attempt to collapse disciplines into larger categories.

Education level

This field indicates whether the courses being studied were at the undergraduate level, the graduate level, or both. The database also includes articles from the medical field that often do not fit these categories. For those studies, we categorized courses for medical residents, or learners in other advanced medical courses, as graduate.

The database also includes many studies conducted outside of the United States. In some of these studies, we were unable to categorize the courses as graduate or undergraduate, and these studies are marked with the label “unknown.”

Modality

Definitions of modality vary, particularly in regard to hybrid/blended courses. Whenever possible, we have defined modality by the terms used by the study author. Database entries are categorized into four main categories with the following definitions:

  1. Traditional, or courses that are primarily taught face-to-face
  2. Web-facilitated, or courses that integrate technology, but not in a way that replaces face-to-face instruction
  3. Hybrid/blended, or courses that integrate technology for the purpose of replacing some, but not all, face-to-face instruction
  4. Fully online, or courses that are primarily taught in the online modality

In cases where terminology such as “e-learning” is used by the author, we have determined the closest category using the definitions described above.

Peer review

For each citation in the database, we used the Ulrichsweb tool to determine whether the journal in which the study was published is peer reviewed.

Date

The date field includes the year that the article was published, as determined by Google Scholar.

Publication

This field shows the full name of the journal in which the article was published. Two journals that we know of underwent a name change within the database date range.

  • The International Review of Research in Open and Distance Learning became The International Review of Research in Open and Distributed Learning
  • Journal of Asynchronous Learning Networks (JALN) became Online Learning

Abstract

This field contains the abstract that is publicly available via Google Scholar or an individual journal’s website.

What citation style is the database using?

This database uses the APA citation style from the Publication Manual of the American Psychological Association (6th edition).

Database scope

What was the search process to find articles to add to the database?

The search for database entries began with a review of the No Significant Difference database and meta-studies focusing on online efficacy research post-2010.

Following this initial exploration, a systematic search was conducted via Google Scholar using the following terms for the date range 1998-2018:

  • distance education efficacy
  • e-learning outcomes
  • hybrid blended outcomes
  • hybrid blended comparison
  • hybrid blended equivalency
  • online course equivalency
  • online face to face comparison
  • online face to face courses
  • online face to face equivalen*
  • online learning efficacy
  • virtual blended hybrid
  • virtual face to face learning

As a final step, a citation review of the bibliographies of meta-studies prior to 2010 was also completed to ensure the comprehensiveness of the database.

Google Scholar was chosen as the main search engine because it combines citations from journal indexes, institutional repositories such as Digital Commons, and more general websites crawled by search bots. Learn more about Google Scholar’s inclusion guidelines.

What are the inclusion criteria for the database?

To qualify for inclusion in the database, a study needed to meet the following criteria:

  1. A comparison of instructional modalities including face-to-face, fully online, hybrid/blended or web-facilitated. (In some studies, online modalities, such as hybrid/blended and fully online, were compared with one another.) Studies that did not include at least one modality comparison were excluded.
  2. Measurement of at least one student performance outcome, such as exam scores, assignment scores, course grades, GPA or another outcome. Studies that only measured student perceptions or satisfaction were excluded from the database.

Why doesn’t the database include quality indicators?

The quality of a study design, depending on the context and needs of the audience, can be subjective. We have chosen to leave the determination of study quality up to individual database users.

Why doesn’t the database include full-text documents or PDFs?

Because many of the database entries are protected by the copyright of the journals in which they were published, we have included only the publicly available abstracts in the database.

If you would like to read any of the articles linked in the database, we recommend checking your local academic library to find copies of the articles of interest to you.

Why doesn’t the database include dissertations?

When compiling citations for the database, dissertations were often not available for review to make determinations about whether they met the criteria for database inclusion.

However, because dissertations represent important contributions to the field of online education efficacy research, we have created a downloadable PDF of all of the dissertation citations we have discovered on the topic of online efficacy research.

We also welcome recommendations of dissertations to be added to our list. Submit a missing citation using the feedback button on the right sidebar or by emailing the Ecampus Research Unit staff directly at ecresearchunit@oregonstate.edu.

Why doesn’t the database include meta-studies?

Because meta-studies, by their nature, are syntheses of a wide range of studies from across disciplines, we found it difficult to make the meta-analyses searchable in a meaningful way within the database.

However, because meta-studies represent important contributions to the field of online education efficacy research, we have created a downloadable PDF of all of the meta-study citations we have discovered on the topic of online efficacy research.

Why doesn’t the database include conference proceedings?

Similar to dissertations, conference proceedings were often not available for review to make determinations about whether they met the criteria for database inclusion.

Database updates

Who is updating the database?

The database will be updated by staff of the Ecampus Research Unit.

How often will the database be updated?

The database is updated monthly. To receive email notifications of database additions, you can sign up for the database notifications email list on this page.

How do I submit a missing citation?

We always appreciate help with the database. Submit a missing citation by emailing the Ecampus Research Unit staff directly at ecresearchunit@oregonstate.edu.