Creating an Experiment

Step-by-Step User Guide

Introduction

The web portal of UpGrade provides an easy way to manage A/B testing in your educational software. This is a step-by-step guide to setting parameters and launching an experiment in the UpGrade UI.

We also have a video tour available.

Setting up a new experiment

After UpGrade has been set up on your cloud infrastructure using the instructions in the Developer Guide, navigate to your instance of UpGrade. After logging in using your credentials, you should see the UpGrade home page.

If no experiments have been created yet, the experiment list will be empty, and you will see two options to create an experiment.

  • Import Experiment allows you to upload a pre-existing JSON file containing the experiment design parameters, such as a file exported from UpGrade at an earlier point in time.

  • Add Experiment starts the experiment creation wizard, where you can manually enter the experiment parameters.
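For reference, an importable experiment file is a plain JSON document describing the design parameters covered in the rest of this guide. The sketch below is purely illustrative; the field names are hypothetical and may not match the exact export schema of your UpGrade version:

```json
{
  "name": "Lesson Comparison",
  "description": "Compares two versions of the same lesson",
  "context": ["my-client-app"],
  "assignmentUnit": "individual",
  "consistencyRule": "individual",
  "conditions": [
    { "conditionCode": "lesson_a", "assignmentWeight": 50 },
    { "conditionCode": "lesson_b", "assignmentWeight": 50 }
  ],
  "partitions": [
    { "expPoint": "SelectLesson", "expId": "lesson_stream" }
  ]
}
```

Exporting an existing experiment from your own instance is the most reliable way to see the exact format expected by Import Experiment.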

Click on Add Experiment to start creating an experiment.

Start by entering a Name for the experiment and an optional Description.

The App Context is where the experiment will run. This is the name of the client application, which was set up in the Developer Guide.

Next, set the Unit of Assignment. UpGrade can randomly assign conditions at the individual level, at the group level, or within-subjects. With individual random assignment, each participant is assigned independently, so participants in the same class may receive different conditions. With group random assignment, all participants within a group receive the same condition; if you select Group, you will be prompted to select a group type (custom group types are also allowed). With within-subjects assignment, a participant receives a new condition each time they reach a decision point.
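The three units of assignment can be contrasted with a small sketch. This is not UpGrade's implementation, just an illustration of the behavior: individual and group assignment are deterministic per user or per group, while within-subjects draws fresh at every decision point.

```python
import random

# Two example conditions; names are illustrative.
CONDITIONS = ["lesson_v1", "lesson_v2"]

def assign_individual(user_id, seed=0):
    # Each participant is randomized independently, so students in the
    # same class may see different conditions.
    rng = random.Random(f"user:{user_id}:{seed}")
    return rng.choice(CONDITIONS)

def assign_group(group_id, seed=0):
    # The draw is keyed on the group, so every member of the same group
    # (e.g. the same class) receives the same condition.
    rng = random.Random(f"group:{group_id}:{seed}")
    return rng.choice(CONDITIONS)

def assign_within_subjects(rng=random):
    # A fresh condition is drawn each time a participant reaches a
    # decision point, so the same participant sees multiple conditions.
    return rng.choice(CONDITIONS)
```

Seeding on the user or group identifier is what makes the first two stable across repeated visits, which is the property group-level assignment relies on.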

The Consistency Rule of the experiment controls the coherence of learning experiences; for instance, it can ensure that either everyone in a group receives the same condition assignment or the whole group remains excluded from the experiment. Read more about how this parameter works in the Glossary.
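The group-coherence behavior described above can be sketched as follows. This is an assumption-laden illustration, not UpGrade's logic: it simply shows a rule under which a group acts as one unit, receiving either the assigned condition for everyone or the default experience for everyone.

```python
def group_assignment(members_excluded, assigned_condition, default="default"):
    """Illustrative group consistency rule.

    members_excluded maps each member id to True if that member is
    individually excluded from the experiment.
    """
    # If any member is excluded, the whole group falls back to the
    # default experience so no one in the group sees a different lesson.
    if any(members_excluded.values()):
        return {m: default for m in members_excluded}
    # Otherwise, every member receives the same assigned condition.
    return {m: assigned_condition for m in members_excluded}
```

The exact semantics of each consistency rule (and how it interacts with the unit of assignment) are defined in the Glossary.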

The Design Type of an experiment lets you choose between different experiment structures. The currently supported types are Simple and Factorial.

You can also optionally add tags to the experiment.

In this guide, we are setting up an experiment that compares the effectiveness of two different versions of the same lesson. To design this experiment, create two conditions in the Experiment Design tab and give each of them an assignment weight; the assignment weights must add up to 100. After defining the conditions, define the Experiment Sites for the experiment. Each site is identified by its Experiment Point and ID, where the Experiment Point is the place or function in the client code where the conditional code execution happens. For more detail, please refer to the following papers: UpGrade: An Open Source Tool to Support A/B Testing in Educational Software and Optimizing an Educational Game Using UpGrade: Challenges and Opportunities.
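The weighted split described above can be sketched as follows. The condition names are hypothetical, and this is only an illustration of weighted random assignment, not UpGrade's internal code: weights summing to 100 act as percentages, so a 50/50 split sends roughly half of the participants to each lesson version.

```python
import random

# Hypothetical design: two versions of the same lesson, split 50/50.
conditions = {"lesson_a": 50, "lesson_b": 50}

# The assignment weights must add up to 100.
assert sum(conditions.values()) == 100

def draw_condition(rng=random):
    # Pick a condition with probability proportional to its weight.
    names = list(conditions)
    weights = [conditions[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]
```

An uneven split such as 80/20 works the same way; only the weight values change, and the check that they total 100 still applies.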
