Frequently Asked Questions
Can I use E-TRIALS to test a commercial product?
Yes, provided you do not have a conflict of interest with the product you are testing.
If you are trying to evaluate an existing educational product, E-TRIALS offers an excellent environment to compare your product to normal instructional practice. We believe it is critical to establish a research environment in which products and interventions can be evaluated for efficacy and results can be shared in an open and transparent manner.
For instance, DragonBox made strong claims about how much learning their commercial product could produce. Long & Aleven’s 2014 study evaluated those claims and revealed that they were exaggerated.
If you have a conflict of interest with the product that you are testing (i.e., your pay depends on the product's efficacy), you should negotiate a licensing fee with the E-TRIALS team.
Can I solicit personally identifiable information?
If you are using E-TRIALS to conduct research with your own private subject pool and your materials meet the requirements of sound human-subjects research, you have more freedom to solicit information.
How can I get my IRB's approval?
For Use of Data
Universities' IRBs have granted approval for use of E-TRIALS in different ways. As of 2018, eight researchers have gained the approval of their home institution's IRB. Approvals have fallen into one of four categories:

Survey and Testing
Exempt under Number 2 (§46.101(b)(2)) ("survey and testing" without personally identifiable information):
Vanderbilt University, Nashville, TN
Harvard University, Cambridge, MA
University of Maine, Orono, ME
University of Colorado, Colorado Springs, CO

Normal Instructional Practice
Exempt under Number 1 (§46.101(b)(1)) (normal instructional practice):
Carnegie Mellon University, Pittsburgh, PA
Southern Methodist University, Dallas, TX

Exempt under Number 4 (§46.101(b)(4)) (subjects cannot be identified):
University of North Dakota, Grand Forks, ND

Teachers College, Columbia University, New York, NY
If you need other examples, please contact nth[at]wpi.edu.
For Running Studies
The steps you must complete to run a study in E-TRIALS include:
Design your study, which will be used with our subject pool, so that WPI's IRB would view it as qualifying as "normal instructional practice," one of the exemptions the WPI IRB uses to approve this whole system. That does not mean your institution's IRB must use that same exemption; see the IRB approval examples above for the many different ways universities have treated this question. Nonetheless, the WPI IRB needs to see that your study compares normal instructional strategies.
Design a great study with well-thought-out research questions. Even if WPI thinks your study qualifies as "normal instructional practice" and your home institution's IRB approves it, Professor Heffernan needs to think that your research question and your content are good and will not damage ASSISTments's credibility with teachers and students. Professor Heffernan also needs to think that your study is minimally disruptive, defined here.
While E-TRIALS provides researchers with anonymized data, it may be possible to link the data back to individual students or teachers despite our team's best efforts to protect their identities. Student-level data is covered under FERPA. By using E-TRIALS or the data it provides, you are confirming that you agree to the following terms and conditions:
You will not use the data to discover personally identifiable information about individual students or teachers.
If you discover data that reveals a student's identity, you will immediately inform the E-TRIALS team (ASSISTments-Research[at]wpi.edu) of the issue and delete it from your downloaded file.
You agree not to give the data to a third party.
You agree not to commercialize the data in any way.
You agree not to use the data in any malicious manner.
What research issues should I be cognizant of while using E-TRIALS?
1. Lack of context
While E-TRIALS provides many covariates and offers context that other educational research studies do not capture, the system does not capture the full context of the study environment for each student. A lack of context may prevent you from knowing if the teacher has done something that will nullify the benefits of your treatment. Students in the control may have been terribly confused and may have kept asking the teacher for help. While a lack of context is a potential weakness, it will likely dilute the effects of a treatment rather than inflate them.
2. Contamination effects
A second, more troublesome issue is contamination effects. In their review paper, McMillan et al. (2007) stated, "an important principle of good randomized studies is that the intervention and control groups are completely independent, without any effect on each other. This condition is often problematic in field research." In real classrooms, a student in one condition may show their assignment to a student in another condition. This will more likely dilute effects than inflate them. One solution is to add a self-report question, such as "Collaboration is a good thing in learning. Did you collaborate with anyone else on this assignment?" In Kelly et al. (2013), we found that some students in the control condition (which represented business as usual) self-reported that they texted their friends asking for help. We realized that this diluted the effect size we estimated; in that case, the effect size was large and we still found reliable differences.
As McMillan and colleagues suggest, “when control subjects realize that intervention subjects have received something ‘special’ they may react by initiating behavior to obtain the same outcomes as the intervention group (compensatory rivalry) or may be resentful and be less motivated (resentful demoralization).” Compensatory rivalry will dilute effects while resentful demoralization will inflate effects. Debriefing could include surveys to assess whether students and teachers had noticed that conditions were different.
3. Differential attrition
Differential attrition is another threat. Since the posttest section is always at the end, you may find that differences in posttest results are due to students in different conditions completing the posttest measures at different rates. It turns out that differential attrition is a threat you can actually turn to your advantage: if one condition causes students to quit and never complete the problems in the posttest section, that, in and of itself, is a useful dependent measure. You may find a condition that causes some students to not complete their homework, but for those who do finish, the effect is large enough to show that finishing students were better off. We call this a "tough-love" condition: it causes some students to quit, but those who persist do significantly better.
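One way to treat attrition itself as a dependent measure is to compare posttest completion rates across conditions, for example with a two-proportion z-test. The sketch below is a hypothetical illustration, not part of E-TRIALS; the counts and function name are made up for the example.

```python
import math

def completion_z_test(completed_a, n_a, completed_b, n_b):
    """Two-proportion z-test on posttest completion rates,
    treating differential attrition as the outcome of interest."""
    p_a = completed_a / n_a
    p_b = completed_b / n_b
    # Pooled proportion under the null hypothesis of equal rates.
    p_pool = (completed_a + completed_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return p_a, p_b, z

# Hypothetical counts: 80 of 100 control students finished the
# posttest section, versus 60 of 100 treatment students.
p_control, p_treatment, z = completion_z_test(80, 100, 60, 100)
```

A large |z| here would indicate that the conditions differ in how many students even reach the posttest, which must be reported alongside any posttest score comparison.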
4. Sequencing effects
Another potential threat to validity is sequencing effects, which arise as students are exposed to a series of experiments: carryover effects from one study may influence the results of a later study.
Each study uses its own independent randomization to mitigate this, but we can also put automatic blocking in place: when randomization is done for study #2, we block on study #1's assignment so that an equal number of students from each condition of study #1 is assigned to each condition of study #2.
The effects of study #1 will then simply increase variance, making differences harder to detect, but they will not threaten the validity of a finding.
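The blocking scheme described above can be sketched as follows. This is a minimal illustration of blocked randomization, not E-TRIALS's actual implementation; the function and variable names are assumptions.

```python
import random
from collections import defaultdict

def blocked_assignment(study1_assignments, study2_conditions, seed=0):
    """Assign students to study #2 conditions, blocking on their
    study #1 condition so that each study-1 group is spread evenly
    across the study-2 conditions."""
    rng = random.Random(seed)
    # Group students into blocks by their study #1 condition.
    blocks = defaultdict(list)
    for student, cond1 in study1_assignments.items():
        blocks[cond1].append(student)
    assignment = {}
    for students in blocks.values():
        rng.shuffle(students)
        # Deal shuffled students round-robin into the study #2 conditions,
        # guaranteeing near-equal counts within each block.
        for i, student in enumerate(students):
            assignment[student] = study2_conditions[i % len(study2_conditions)]
    return assignment

# Hypothetical prior study: 4 control and 4 treatment students.
study1 = {f"s{i}": "treatment" if i % 2 else "control" for i in range(8)}
study2 = blocked_assignment(study1, ["A", "B"])
```

With this scheme, any lingering effect of study #1 is balanced across study #2's conditions, so it adds noise rather than bias.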
5. Novelty effects
A final threat to internal validity is novelty effects, or Hawthorne Effects. A novelty effect is any new or different condition that improves learning simply because students are paying attention to it based on its novelty. A condition that you submit may best our certified control condition but the effects may not generalize to other problem sets. Novelty effects inflate an observed effect size. Ultimately we are able to detect novelty effects through replication, by applying an idea multiple times to see if it loses its novelty.