Every study is unique, but some overarching design features recur. We will demonstrate them through the following examples of randomized controlled experiments that have already been conducted in ASSISTments.
Comparing Video and Text Feedback
This study was an initial exploration into replacing text feedback in ASSISTments with matched-content videos, drawing on Richard Mayer's multimedia learning principles. The ID for this assignment is PSAHVAN; make a copy and take a closer look in your account. When assigned, each student saw 11 problems; 6 were on the Pythagorean Theorem and could be delivered with student supports presented as video or text.
The top level of this assignment uses a Complete All - Linear Order section type, so students must complete each of its elements in order. First they enter an Introduction (a single problem, PRAUVJS) that prompts students to turn on their computer volume to better access the videos. Because this is an older study, it was not built with a video check or any accessibility features, a clear limitation.
After proceeding through the Introduction, students are randomly assigned to one of four groups using a Choose One section type. The four conditions presented the problems in different orders, and each gave students opportunities to access both video and text feedback for a within-subjects design. Published work from this study combined Groups 1 and 3 into a single condition (video first) and Groups 2 and 4 into another (text first).
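The Choose One section amounts to uniform random assignment at the point of entry. A minimal Python sketch of that logic, and of the grouping used in the published analysis, follows; it is illustrative only, not ASSISTments code:

```python
import random

# The four conditions from the study: Groups 1 and 3 present video
# feedback first; Groups 2 and 4 present text feedback first.
CONDITIONS = ["Group 1", "Group 2", "Group 3", "Group 4"]

def choose_one():
    """Mimic a Choose One section: pick one condition uniformly at random."""
    return random.choice(CONDITIONS)

def analysis_condition(group):
    """Collapse the four groups as in the published analysis."""
    return "video first" if group in ("Group 1", "Group 3") else "text first"
```

Collapsing the four groups into two at analysis time is a common way to counterbalance order while keeping the statistical comparison simple.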
After completing the six problems within one of the four groups, students were brought back to the top level, where they completed four posttest items (PRAUVJT, PRAUVJU, PRAUW4T, and PRAUW49).
Assessing Grit and Mindset
Designed in collaboration with a colleague from the Lytics Lab at Stanford University, this study investigated the effects of Carol Dweck's motivational messages within ASSISTments. The ID for this problem set is PSAKUSU in case you want to make a copy and take a closer look.
In each condition, students answered the same mindset questions before and after a video to gauge their mindset. This is a stand-alone problem set in which a Skill Builder handles the randomization.
This is an older study, built using a Skill Builder to control random assignment as well as skill mastery (random-order Skill Builders assign problems, or subsections containing problems, until students achieve mastery).
To reach mastery, students had to get three prime factorization problems correct in a row.
Each of the three conditions (PPTC, PPTM, and AnM) used Complete All - Linear Order sections, but the outer Skill Builder section controlled mastery settings (3 right in a row, day limit of 100, etc.).
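The mastery rule described above (three correct in a row) reduces to tracking a streak of consecutive correct responses. A minimal sketch, assuming responses arrive as a list of booleans; this is illustrative, not the ASSISTments implementation:

```python
def reached_mastery(responses, streak_needed=3):
    """Return True once `responses` (True = correct) contains
    `streak_needed` consecutive correct answers."""
    streak = 0
    for correct in responses:
        streak = streak + 1 if correct else 0  # any wrong answer resets the streak
        if streak >= streak_needed:
            return True
    return False
```

Items that should not count toward mastery (such as the survey questions in the Intro sections) would simply be excluded from the response list.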
Each condition has two subsections within its Complete All - Linear Order section. Students first enter an Intro section (also Complete All - Linear Order) and then a Complete All - Random Order "Skill Builder" (SB). Again, mastery is controlled by the outer level, but this section mimics a traditional Skill Builder.
For each condition, the Intro has three problems (M1, Video, and M2, where the M questions are surveys and the video messaging differs across conditions). These three problems were delivered in "Test" mode, so students would not see that all responses were marked as incorrect; this allowed the Skill Builder to count only the math problems in the SB sections toward mastery. After working through the Intro problems, students were pulled back out into the condition subsection (e.g., PPTC) and then into the subsequent "SB" section, where they worked toward mastery.
After reaching mastery in one of the three conditions, the assignment was complete; there was no posttest, not even at the top level.
Assessing Adaptive Homework
This study was designed in two parts to assess the effects of adaptive homework compared to a traditional paper-and-pencil approach. The IDs for the two assignments that formed this study are PSAMDQP and PSAKWXB in case you want to make copies and take a closer look. These problem sets were assigned through individual assignment across blocked groups of students, with each student receiving 12 problems from the Connected Math textbook for homework (with or without feedback and student supports).
The study used a crossover design so that both groups of students received both control and treatment materials, yielding a within-subjects design. Group A received feedback and student supports while Group B did not. To do this, the researcher used individual assignment rather than a Choose One section type, as she was focused on whole-class instruction and homework review.
The first problem set featured 12 items from the Connected Math textbook without feedback or student supports.
The second problem set contained the same 12 items from the Connected Math textbook, but with adaptive feedback and student supports.
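One way to picture the crossover is as a two-period schedule in which each blocked group receives the two problem sets in opposite orders. A hedged sketch: the IDs come from the study, but the mapping of IDs to conditions and which set each group saw first are assumptions made for illustration:

```python
# Assumed mapping of problem-set IDs to conditions (IDs are from the study):
#   PSAMDQP - 12 items without feedback or supports (control)
#   PSAKWXB - the same 12 items with adaptive feedback and supports (treatment)
CONTROL, TREATMENT = "PSAMDQP", "PSAKWXB"

def crossover_schedule(group):
    """Two-period crossover: each group works both problem sets,
    in opposite orders across the two parts of the study."""
    if group == "A":
        return [TREATMENT, CONTROL]  # assumed: supports first
    if group == "B":
        return [CONTROL, TREATMENT]  # assumed: supports second
    raise ValueError("group must be 'A' or 'B'")
```

The key property is that every student sees both conditions, so each student serves as their own control.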
Assessing Worked Examples
This study was designed to assess worked examples used as student supports. Published results indicated that tutored problem solving using scaffold questions produced significantly greater learning gains than worked examples, though scaffolding also took significantly more time.
At the top level this assignment is a Complete All - Linear Order section. All students completed three subsections: Pre-Test, Experiment, and Post-Test. The Pre-Test and Post-Test used Complete All - Linear Order sections so students would all receive the same content in the same order in these subsections.
After completing content in the Pre-Test subsection, students entered the Experiment subsection, which was controlled by a Choose One section type; students were therefore randomly assigned to either the Scaffold Question condition or the Worked Example condition. These subsections used Complete All - Linear Order types, matching content and ordering across conditions while allowing the type of student support to differ: Scaffold Questions or Worked Examples.
If you dive deeper, you will see that each condition had three problems students had to complete before being brought back out to the top level and into the Post-Test. Once students completed the material in the Post-Test subsection, the assignment was complete.
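The nesting described in this section can be summarized as a small data structure, with a helper that walks one student's path through it. The section names and types come from the text; the representation itself is illustrative, not how ASSISTments stores assignments:

```python
import random

# Top-level assignment: Complete All - Linear Order with three subsections.
assignment = {
    "type": "Complete All - Linear Order",
    "subsections": [
        {"name": "Pre-Test", "type": "Complete All - Linear Order"},
        {"name": "Experiment", "type": "Choose One", "conditions": [
            {"name": "Scaffold Questions", "problems": 3},
            {"name": "Worked Examples", "problems": 3},
        ]},
        {"name": "Post-Test", "type": "Complete All - Linear Order"},
    ],
}

def student_path(assignment):
    """Walk the subsections in order; a Choose One section randomly
    assigns the student to one of its conditions."""
    path = []
    for section in assignment["subsections"]:
        if section["type"] == "Choose One":
            path.append(random.choice(section["conditions"])["name"])
        else:
            path.append(section["name"])
    return path
```

Every student passes through the same Pre-Test and Post-Test, and only the middle step varies, which is exactly what makes the posttest comparison between conditions fair.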