Canvas analytics and assessment data week fourteen fall 2021


Although the above chart is a tad confusing visually, it shows that Monday, Wednesday, Thursday, and Friday of week fourteen saw upticks in the number of page views on the Instructure Canvas platform. Tuesday held steady week-on-week. This is the first sign of recovery in engagement since midterm. [Chart updated on Monday of week 15]


Overall engagement as measured by page views for week fourteen exceeded the week thirteen and week twelve values. Note that this was also the first five-day work week since October. [Chart updated on Monday of week 15]


This chart also shows week fourteen engagement rising above weeks twelve and thirteen. [Chart updated on Monday of week 15]


The number of assignments on the platform showed the most growth, with smaller increases in the number of discussions and media recordings.


Assignments remain the dominant form of interaction on the Canvas platform. 


The record count in the main outcomes table is now 6024 rows. Of these rows, 4945 have outcome scores. The remainder come from rubrics being used with the scores stripped out. There may be a way to resolve this by post hoc re-assigning scores to rubric ratings, although the details of how this might best be accomplished have yet to be worked out.
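As a side note, checking that scored versus unscored split is straightforward once the outcomes table is exported to a CSV file. The sketch below uses pandas; the file name and the "outcome score" column header are assumptions and would need to be matched to the actual export.

    import pandas as pd

    # Hypothetical export of the main outcomes table; the file name and
    # the column header are assumptions, not the literal Canvas values.
    outcomes = pd.read_csv("outcome_results.csv")

    scored = outcomes["outcome score"].notna().sum()
    unscored = len(outcomes) - scored

    print(f"{len(outcomes)} rows: {scored} scored, {unscored} without scores")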

Postscript notes to myself


The Outcome results CSV report from Instructure Canvas includes both an Outcome Score field and a Learning Outcome Rating Points field. For assignments marked by rubrics using point-and-click grading, the two columns contain identical values.

If an instructor manually enters a rating score, then the Outcome score field contains the manually entered score. For a rubric with ratings of 5, 4, 3, and 0, a faculty member can enter a score of 4.5 manually and not click on a rating. In the CSV, Canvas reports the 4.5 as the Outcome score. The rating, however, is reported as "Sufficient" and the Learning outcome rating points are reported as 4. The value has apparently been truncated to an integer.
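One reading of this behavior, a guess rather than documented Canvas logic, is that the manual score is snapped down to the highest rating at or below it. For the 4.5 example above, simple integer truncation and a snap to the rating at or below are indistinguishable; the sketch below implements the latter since it also accounts for the "Sufficient" label. "Sufficient" = 4 and "No evidence" = 0 come from the CSV report; the other labels are placeholders.

    # A guess at the mapping: take the highest rating at or below the
    # manually entered score. "Sufficient" = 4 and "No evidence" = 0 are
    # from the CSV report above; the other labels are placeholders.
    RATINGS = [(5, "Exemplary"), (4, "Sufficient"), (3, "Developing"), (0, "No evidence")]

    def rating_for(manual_score):
        for points, label in RATINGS:  # sorted from highest to lowest
            if manual_score >= points:
                return points, label
        return RATINGS[-1]  # below every rating: report the lowest

    print(rating_for(4.5))  # (4, 'Sufficient'), matching the CSV report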


For quizzes the behavior of the Learning outcome rating points field can only be described as erratic. In the above excerpt from the CSV file, students have scored 8 (Outcome score) out of 8 (Learning outcome points possible) on matching questions on a quiz. The question is linked to a Classic quiz question bank, and the question bank is linked to course learning outcome SC130.2. Each row is a student, and each student has a perfect score. The Pentascale score calculated from 5 × Outcome score ÷ Learning outcome points possible will be correct. The Learning outcome rating points, however, vary from zero to five. Thus while the Learning outcome rating points field has the advantage of already being on a five-point scale, the conversion for quizzes is erratic and seemingly unpredictable if not random. Note that the first two data rows above show an 8/8, 100%; they indicate the learning outcome was mastered, yet the rating is "No evidence" with a score of zero. As they say, "Go figure."
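The Pentascale calculation itself is simple enough to write down; the sketch below merely restates the formula above in Python.

    def pentascale(outcome_score, points_possible):
        # 5 × Outcome score ÷ Learning outcome points possible
        return 5 * outcome_score / points_possible

    print(pentascale(8, 8))  # 5.0 for the perfect quiz scores above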

The upside to using Learning outcome rating points, if any, might be that for assignments the field stays on the rating scale rather than picking up manually entered Outcome scores in rubrics.

All dashboards except the newest one seen above use the Pentascale function to report on a five-point scale for both quizzes and tests. This continues to appear to be preferable. Neither field reports points in situations where faculty have chosen to "Remove points from rubric."

Both fields are null in that instance, although the rating may still be reported in Learning outcome rating. One could, post hoc, assign values based on the ratings, as sketched below. No dashboard takes this approach at present.
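A minimal sketch of that post hoc step follows. Only "Sufficient" (4) and "No evidence" (0) are attested in the reports above; the other labels and point values are placeholders that would have to be matched to each rubric's actual rating scheme.

    # Hypothetical lookup from Learning outcome rating text back to points
    # for rows where both score fields are null. Only "Sufficient" and
    # "No evidence" are confirmed above; the rest are placeholders.
    RATING_POINTS = {
        "Exemplary": 5,
        "Sufficient": 4,
        "Developing": 3,
        "No evidence": 0,
    }

    def points_from_rating(rating):
        return RATING_POINTS.get(rating)  # None if the label is unknown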

Dashboards

Course learning outcomes fall 2021
General education outcomes 2021.3
Course learning outcomes assessment fall 2021 dll sis
Pentascale average time series fall 2021
Learning outcome ratings on assignments only 2021.3
Spectacular dashboards: a cautionary tale

Reports

Week one: assessment available at sunrise on a new term
Week two: SIS disaggregation and a deep dive into mapping decisions
Week three: SIS disaggregation and another dive into mapping complexities
Week four: a focus on subaccounts and campuses
Week five: pivot of state of origin versus gender
Week six: dashboards can be focused
Attendance versus performance in statistics
Week seven: outcome performance at the course level and other dashboards
Week eight: update at midterm
Week nine: a look at score and grade distributions
Week ten: grey goo revisited
Week eleven: update on the numbers
Week twelve: page views collapse
Week thirteen: holiday impact or disengagement?
Week fourteen: some indications of reengagement
