Assessing learning in introductory statistics

MS 150 Introduction to Statistics has utilized an outline based in part on the 2007 Guidelines for Assessment and Instruction in Statistics Education (GAISE) and on the college's ongoing effort to incorporate authentic assessment into courses. The three course level student learning outcomes currently guiding the course are listed below, followed by a sketch of the calculations they name:
  1. Perform basic statistical calculations for a single variable up to and including graphical analysis, confidence intervals, hypothesis testing against an expected value, and testing two samples for a difference of means.
  2. Perform basic statistical calculations for paired correlated variables.
  3. Engage in data exploration and analysis using appropriate statistical techniques including numeric calculations, graphical approaches, and tests.
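As a concrete illustration of the calculations the first two outcomes name, here is a minimal sketch in Python using scipy. The tooling is my illustration, not a course tool, and the sample values are invented:

```python
# Minimal sketch of the outcome 1 and 2 calculations on invented data.
import numpy as np
from scipy import stats

x = np.array([61.0, 72.5, 68.0, 75.5, 70.0, 66.5, 73.0, 69.5])  # hypothetical sample
y = np.array([64.0, 70.5, 62.0, 71.5, 66.0, 63.5, 68.0, 65.5])  # hypothetical second sample

# Outcome 1: basic single variable statistics.
n, mean, sx = len(x), x.mean(), x.std(ddof=1)
# 95% confidence interval for the population mean.
ci = stats.t.interval(0.95, df=n - 1, loc=mean, scale=sx / np.sqrt(n))
# Hypothesis test against an expected value (70 is an arbitrary example).
t1, p1 = stats.ttest_1samp(x, popmean=70.0)
# Testing two samples for a difference of means.
t2, p2 = stats.ttest_ind(x, y, equal_var=False)

# Outcome 2: paired correlated variables - correlation and linear regression.
slope, intercept, r, p_r, se = stats.linregress(x, y)

print(f"mean={mean:.2f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f})")
print(f"one-sample t={t1:.2f} (p={p1:.3f}), two-sample t={t2:.2f} (p={p2:.3f})")
print(f"regression: y = {slope:.2f}x + {intercept:.2f}, r = {r:.2f}")
```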
The first two outcomes involve the students' basic calculation capabilities and are assessed via an item analysis of the final examination, which was delivered as a test inside Schoology. Fifty-four students in three sections took the final examination.

The first course learning outcome focuses on basic statistics. Twenty questions on the final examination required the students to perform basic single variable statistical calculations on a small sample. Based on the item analysis, 82.5% of the items were answered correctly. Spring 2015 produced a success rate of 78.5%; fall 2014 the success rate was 82.6%. In general, basic single variable statistical calculations are an area of strength for the students, and performance tends to be very stable term-on-term.
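For reference, the success rate from an item analysis is simply the share of correct responses across all students and items. A minimal sketch, assuming a 0/1 response matrix (the matrix here is randomly generated; Schoology's actual export format differs):

```python
# Sketch: per-item and overall success rates from a 0/1 response matrix
# (rows are students, columns are final examination items). Invented data.
import numpy as np

rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(54, 20))  # 54 students, 20 items

per_item = responses.mean(axis=0)  # fraction of students correct on each item
overall = responses.mean()         # section success rate, e.g. 0.825 -> 82.5%

for i, rate in enumerate(per_item, start=1):
    print(f"item {i:2d}: {rate:.1%} correct")
print(f"section success rate: {overall:.1%}")
```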

Performance on the second course learning outcome, linear regression statistics, was measured by nine questions on the final examination. Student performance on this section was 70.6%. Spring 2014 the average was 69.6%; fall 2014 the average was 68.2%. This section of the final examination has historically been weaker than the basic single variable statistics section, and that weakness was seen again fall 2015. The term-on-term performance, however, is stable, and the use of an online examination for a second consecutive term shows no significant impact on performance.

Performance on the third course learning outcome, open data exploration and analysis, is not comparable term-on-term as measured by points awarded: the scoring system for the open data exploration section of the final examination varies from term to term. Performance is always weaker on this section than on the first two learning outcomes. Students perform strongly when asked to calculate a specific statistic but struggle when raw data and open-ended questions about the data are posed. This section was delivered as a single essay question set up in Schoology and marked by the instructor.

The students performed very weakly this term on the open data exploration exercise presented during the final examination. The exercise was, in retrospect, the most difficult ever assigned: the multi-column layout was novel, and the concept of running multiple correlations had not been covered during the course. That said, the students could have sussed out that correlation was the way forward, and two of the fifty-four saw this path and went straight to the fully correct answer, including supporting correlation values. The students are still generally limited to working with exactly what they were taught; extending existing capability into a slightly novel situation is difficult for them. The course is, after all, effectively only a first introduction to basic statistics.
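For the record, "running multiple correlations" on a multi-column layout amounts to correlating the reference column against each candidate column in turn. A minimal sketch, with invented column names and values:

```python
# Sketch of the multiple-correlation path the two successful students found:
# correlate one reference column against each candidate column. Invented data.
import numpy as np

candidates = {
    "column_a": np.array([2.1, 3.4, 2.9, 4.0, 3.6]),
    "column_b": np.array([1.0, 1.2, 0.9, 1.4, 1.1]),
    "column_c": np.array([2.0, 3.5, 3.1, 3.9, 3.4]),
}
reference = np.array([2.2, 3.3, 3.0, 4.1, 3.5])

for name, column in candidates.items():
    r = np.corrcoef(reference, column)[0, 1]  # Pearson correlation coefficient
    print(f"{name}: r = {r:+.2f}")
# The candidate with |r| closest to one tracks the reference most closely.
```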

Of interest was that a week earlier the students had performed much better on a more familiar data exploration exercise, one in which the data structure conformed more closely to structures seen in class. In that exercise the students presented their results to their classmates. The decision to have the students present to the class was taken late in the term and was not in the syllabus. The classroom had acquired a new LCD touch screen SMART board unit, and its arrival the weekend before the last week of class presented the opportunity to make a presentation-based assignment.

[Photo: Meigan sets up her presentation on the new LCD panel]

I opted to use the presentations as this is a more authentic form of assessment: make an analysis and report to a group. Time was constrained: the students had only one class day (Wednesday) to work on the presentation. The exercise was assigned on Monday and the presentations were on Friday, mimicking a "have a presentation on the new data ready for Friday's meeting..." type of work assignment.


The presentations were marked using a rubric based on the quality of the solution. Note that only 37 of the 54 students who sat the final exam gave presentations the week before. The presentation was not technically a course requirement, and some students undoubtedly chose not to do the exercise; others were likely not ready to present. Presentations had to be submitted by 8:00 on Friday morning, and students who were late to submit were not included; this was designed to prevent students from working on their presentations during class on Friday. The percentages below are out of 37; a short check of the arithmetic follows the list.

Optimal statistical analysis, correct conclusion: 6 students (16.2%)
Reasonable statistical analysis, correct conclusion: 2 students (5.4%)
Minimal statistical analysis, correct conclusion: 4 students (10.8%)
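The arithmetic behind the percentages is each count divided by the 37 presenters; a quick check, which also yields the combined correct-conclusion rate:

```python
# Verify the rubric percentages against the 37 presenters.
counts = {"optimal": 6, "reasonable": 2, "minimal": 4}
presenters = 37
for level, n in counts.items():
    print(f"{level}: {n}/{presenters} = {n / presenters:.1%}")
# Combined: 12 of 37 presenters (32.4%) reached a correct conclusion.
print(f"correct conclusion overall: {sum(counts.values()) / presenters:.1%}")
```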

Using the final examination success rates and the presentation scoring, the longer term course learning outcome averages provide some context for these values. Performance on both the first course learning outcome, basic statistics, and the second course learning outcome, linear regression statistics, has been stable over the past three years. The open data exploration results are highly variable as rubrics vary and intrinsic difficulty changes. This term the presentation scores were used to generate the open data exploration average and standard deviation.


In the chart above, the centers of the topmost yellow circles are located at the students' average success rate on questions under the first course learning outcome, basic single variable statistics; the radii are the standard deviations. The chart reports results from 2012 to the present. The middle blue circles track performance under the second course learning outcome, paired dependent data. The bottom-most orange circles track performance on open data exploration and analysis, a section introduced in 2012.
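A chart of this form can be sketched with matplotlib (my substitution for illustration; the original chart was produced elsewhere, and all values below are invented): one circle per term per outcome, centered at the term average with radius equal to the standard deviation.

```python
# Sketch of a mean-and-standard-deviation circle chart: circles centered at
# (term, average success rate) with radius equal to the standard deviation.
# All values are invented for illustration.
import matplotlib.pyplot as plt

terms = [2012.5, 2013.0, 2013.5, 2014.0, 2014.5, 2015.0, 2015.5]
means = [0.80, 0.83, 0.79, 0.83, 0.78, 0.79, 0.82]  # hypothetical outcome averages
sds   = [0.10, 0.12, 0.11, 0.10, 0.13, 0.11, 0.10]  # hypothetical standard deviations

fig, ax = plt.subplots()
for t, m, s in zip(terms, means, sds):
    ax.add_patch(plt.Circle((t, m), radius=s, color="gold", alpha=0.4))
ax.set_xlim(2012, 2016)
ax.set_ylim(0.4, 1.0)
ax.set_xlabel("term")
ax.set_ylabel("success rate")
plt.show()
```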

Overall success rate on the final examinations has been exceptionally stable over the past three years, and generally stable for the past decade. The long term average success rate is 73.5%; the current term saw a 76.5% success rate on basic and linear regression statistics. Prior to 2012 open data exploration was not included in the final examination, so that section is excluded from the longer term analysis.


In an educational world where a common goal is "continuously improving" best practices, the inert stability of the success rate above might be seen as a failure to continuously improve. The effort to continuously improve mathematics education goes back not merely to the new math of the 1960s but much, much further. Ultimately there are long term average success rates, and statistics assures us that values tend to regress to their long term means. A look at the running cumulative mean success rate on the final examination since 2005 suggests that the longer term mean to which terms return might itself be improving, but even this statistic is subject to a tendency to return to an even longer term mean.
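The running cumulative mean is simply the mean of every term success rate observed up through that term, recomputed each term; a short sketch with invented term values:

```python
# Running cumulative mean of term success rates: at each term, the mean of
# all term rates observed so far. The rates below are invented.
import numpy as np

term_rates = np.array([0.71, 0.73, 0.70, 0.74, 0.72, 0.75, 0.76, 0.765])
running_mean = np.cumsum(term_rates) / np.arange(1, len(term_rates) + 1)

for i, (rate, mean) in enumerate(zip(term_rates, running_mean), start=1):
    print(f"term {i}: rate={rate:.1%}, cumulative mean={mean:.1%}")
```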



In general students who complete the course are able to successfully make basic statistical calculations on 72% to 74% of the questions posed.

The course average over time includes performance on homework, quizzes, and tests. Course level performance underlies course completion rates. Data on course level performance is available from 2007 forward.


The course wide average has a long term value of 77.8%. The radii of the circles are proportional to the standard deviation of the student averages across all three sections of the course. The standard deviation is fairly constant over time at about 15%.

Schoology provides question-by-question item analysis of all quizzes and tests delivered through the system. The final examination item analysis gives the instructor specific information on which concepts were mastered and which were not. Note that Schoology generated the data; LibreOffice was used to generate the data bars chart.


This is the level of assessment data that provides an instructor with specific guidance on how to improve the course at a concept-by-concept level.
