Explaining Learning Analytics


In each assignment, a student is evaluated both as an author (creator) of a Submission and as a reviewer (critic and evaluator) of other students’ Submissions, along four diagnostic dimensions (indexes): attainment, bias, controversy, and self-assessment (in)accuracy. The system also reports the intra-group inter-observer reliability of these measures, based on the implicit consensus among peers.


Indexes for Submissions are computed from the benchmarks (ranks) given in the Review phase; indexes for Reviews (critiques) are computed from the benchmarks (ranks) given in the Reaction phase.
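
To make the data flow concrete, the sketch below shows one hypothetical way the two sets of ranks could be organized in Python; the names and structures are illustrative, not the actual Mobius SLIP data model. It assumes ranks run from 1 (best) to the group size.

    # Hypothetical structures for the two rank sources (illustrative names,
    # not the actual data model). Ranks run from 1 (best) to the group size.

    # Review-phase ranks: each reviewer ranks peers' Submissions.
    # These feed the Submission indexes.
    review_phase_ranks = {
        "alice": {"bob": 1, "carol": 2, "dave": 3},   # Alice's ranking of peers' Submissions
        "bob":   {"alice": 1, "carol": 3, "dave": 2},
    }

    # Reaction-phase ranks: each author ranks the Reviews received on
    # their own Submission. These feed the Review indexes.
    reaction_phase_ranks = {
        "carol": {"alice": 2, "bob": 1},              # Carol ranks the Reviews she received
    }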


The attainment index indicates the overall “goodness” of a Submission (or Review) and is the aggregation (average) of peer evaluations (ranks) of the given student’s Submission (or Reviews). It varies between 1 and the desired group size. A missing Submission or Review is assigned a value of 0.
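
As a sketch, the attainment computation could look like the following; the function name and the list-of-ranks input are assumptions for illustration, not the system’s actual code.

    def attainment(peer_ranks):
        """Average of the peer-assigned ranks for one student's Submission
        (or Review); missing work, with no ranks, is assigned 0."""
        if not peer_ranks:              # the Submission or Review was never turned in
            return 0
        return sum(peer_ranks) / len(peer_ranks)

    # Three peers ranked the Submission 2nd, 3rd, and 2nd:
    print(attainment([2, 3, 2]))        # 2.33..., between 1 and the group size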


The controversy index indicates how much peers diverge in their evaluations of a given student’s work. Controversy focuses on the result of the evaluations and measures the variance in evaluations of a particular piece of work. It is computed as the average deviation of co-reviewers from each other and varies between 0 (no controversy, or perfect convergence) and 1 (highest possible divergence). If a student failed to turn in a Submission (or Review), the corresponding controversy index for this student cannot be computed.
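
A minimal sketch of this computation, assuming (the text does not spell this out) that the average absolute pairwise difference between co-reviewers’ ranks is normalized by the largest possible difference, group size minus 1:

    from itertools import combinations

    def controversy(peer_ranks, group_size):
        """Average pairwise deviation among the ranks co-reviewers assigned
        to one piece of work, scaled to the range 0..1."""
        if len(peer_ranks) < 2:         # missing work: index cannot be computed
            return None
        pairs = list(combinations(peer_ranks, 2))
        avg_deviation = sum(abs(a - b) for a, b in pairs) / len(pairs)
        return avg_deviation / (group_size - 1)

    print(controversy([2, 2, 2], group_size=4))   # 0.0, perfect convergence
    print(controversy([1, 4], group_size=4))      # 1.0, highest possible divergence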


The bias index indicates how much a given student’s evaluations of peers’ work disrupt the overall evaluation agreement in the peer group or, in other words, contribute to the controversy of other peers’ work. Bias focuses on the impact of a particular student’s evaluations; it is computed as the average deviation of the student’s evaluations from those of the co-reviewers and varies between 0 (no bias, or perfect convergence) and 1 (highest possible divergence). If a student failed to turn in a Review (or Reaction), the corresponding bias index for this student cannot be computed.
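
A sketch under the same illustrative assumptions: for each work the student ranked, compare the student’s rank with the mean of the co-reviewers’ ranks for that work, average the absolute differences, and normalize by group size minus 1. The data shapes are hypothetical.

    def bias(own_ranks, co_reviewer_ranks, group_size):
        """Average deviation of one student's evaluations from those of the
        co-reviewers who judged the same work, scaled to 0..1.

        own_ranks:         {work_id: rank this student gave}
        co_reviewer_ranks: {work_id: [ranks the other reviewers gave]}
        """
        if not own_ranks:               # no Reviews (or Reactions) turned in
            return None
        deviations = [
            abs(rank - sum(co_reviewer_ranks[work]) / len(co_reviewer_ranks[work]))
            for work, rank in own_ranks.items()
        ]
        return sum(deviations) / len(deviations) / (group_size - 1)

    # A student who consistently lands 2 ranks away from the co-reviewer consensus:
    print(bias({"w1": 1, "w2": 4}, {"w1": [3, 3], "w2": [2, 2]}, group_size=4))  # ~0.67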


The self-assessment inaccuracy index indicates how different a given student’s self-evaluation is from the attainment index derived from peer evaluations of that student’s work. It is computed as the deviation of the self-evaluation from the attainment index and varies between 0 (very accurate self-evaluation) and 1 (very inaccurate self-evaluation). If a student failed to turn in a Review (or Reaction), the corresponding self-assessment inaccuracy index for this student cannot be computed.
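
Sketched below under the assumption that the self-evaluation sits on the same 1-to-group-size rank scale and that the deviation is normalized by group size minus 1 (the exact scaling is not spelled out in the text):

    def self_assessment_inaccuracy(self_rank, attainment_index, group_size):
        """Deviation of a student's self-evaluation from the attainment index
        their work earned from peers, scaled to 0..1."""
        if self_rank is None:           # no self-evaluation turned in
            return None
        return abs(self_rank - attainment_index) / (group_size - 1)

    # The student ranked themselves 1st, but peers placed the work near 3rd:
    print(self_assessment_inaccuracy(1, 2.8, group_size=4))   # ~0.6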


The intra-group inter-observer reliability (IGIOR) index, or group-level agreement, indicates how far the given peer group as a whole is from perfect convergence in ranking each other’s work. For a group with perfect convergence, the group-level agreement equals 1; for a group with perfect divergence, the group-level agreement equals 0.
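
The exact formula is not given, but one simple reading that matches the stated endpoints is the complement of the average controversy across the group’s work, as sketched here:

    def group_agreement(controversy_indexes):
        """IGIOR sketch: 1 minus the average controversy across all work in
        the group, so 1 means perfect convergence and 0 perfect divergence."""
        valid = [c for c in controversy_indexes if c is not None]
        return 1 - sum(valid) / len(valid)

    print(group_agreement([0.0, 0.0, 0.0]))   # 1.0, perfect convergence
    print(group_agreement([1.0, 1.0, 1.0]))   # 0.0, perfect divergence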
