Assessment of a common assignment

A consultant recently proposed that faculty individually select an assignment to which an agreed-upon rubric would be applied. He further suggested tabulating the number of students who met the outcome at a high, medium, or low/minimal level, or who did not meet the outcome at all. Due to prior obligations I was unable to attend the meeting with the consultant, but once I saw the notes I realized that general education science already had the data necessary to report results sliced in this manner.

Back in August 2012 I had prepared a three-year overview of data collected against a rubric. I reported averages for each year, but the original data could be re-sliced into counts, and the rubric happened to lend itself well to four categories.

The rubric included four metrics rated on a scale of one to four, which could be mapped to the categories listed above. The first metric measured the use of scientific procedures and reasoning.


Metric: Scientific Procedures and Reasoning

High     Accurately and efficiently used all appropriate tools and technologies to gather and analyze data
Medium   Effectively used some appropriate tools and technologies to gather and analyze data with only minor errors
Low      Attempted to use appropriate tools and technologies but information inaccurate or incomplete
Not met  Inappropriate use of tools or technology to gather data

The counts of students by year, course, and location for this metric are as follows.

Count of student ratings on the procedures metric (proc):

Fall   Course  Campus    not met  low  medium  high  Total
2009   SC117   Chuuk           1    4       2            7
               Pohnpei              6                    6
       SC120   Kosrae               3                    3
               National        2   10                   12
               Yap                  3       1            4
       SC130   Kosrae                       5            5
       SC255   National        2   12                   14
2010   SC117   Pohnpei              2       7     2     11
       SC120   Kosrae          1    4       4            9
               National        1    4       4            9
               Yap             1    2       7     1     11
       SC130   National             4      15     1     20
               Pohnpei         1    3       5     2     11
2011   SC117   Chuuk          24    8                   32
               Pohnpei         1   12       2           15
       SC120   Kosrae          5   11       3           19
               National       13   17       8     1     39
       SC130   National       11   12       3           26
               Pohnpei              5       5           10
Total                         63  122      71     7    263
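A table like the one above can be produced from per-student ratings with a cross-tabulation. The sketch below uses pandas with made-up rows; the column names (fall, course, campus, proc) are illustrative, not the actual instrument's field names.

```python
import pandas as pd

# Hypothetical raw scores: one row per student, with the 1-4 rubric rating
# on the "Scientific Procedures and Reasoning" metric.
raw = pd.DataFrame({
    "fall":   [2009, 2009, 2009, 2010, 2010, 2011],
    "course": ["SC117", "SC117", "SC120", "SC120", "SC130", "SC117"],
    "campus": ["Chuuk", "Chuuk", "Kosrae", "Yap", "National", "Chuuk"],
    "proc":   [2, 3, 2, 4, 3, 1],  # 1 = not met ... 4 = high
})

# Map the 1-4 scale onto the four reporting categories
labels = {1: "not met", 2: "low", 3: "medium", 4: "high"}
raw["category"] = raw["proc"].map(labels)

# Count students by year, course, and campus for each category,
# with row and column totals via margins
counts = pd.crosstab(
    [raw["fall"], raw["course"], raw["campus"]],
    raw["category"],
    margins=True, margins_name="Total",
)
print(counts)
```

The same counts could be produced by a spreadsheet pivot table; the point is only that averages and category counts are two slices of the same underlying ratings.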

Although, as noted in the August report, there are inter-term rater issues, the data is dominated by students achieving at the low/minimal level, followed by medium and then not met. Very few students are rated as having achieved the equivalent of the "high" level.
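Expressed as percentages of the 263 pooled ratings, the distribution works out roughly as follows; a quick sketch of the arithmetic:

```python
# Totals by category from the table above (single rater, all terms pooled)
totals = {"not met": 63, "low": 122, "medium": 71, "high": 7}
n = sum(totals.values())  # 263 students

for category, count in totals.items():
    print(f"{category:>8}: {count:>3} ({100 * count / n:.1f}%)")
```

Roughly 46% of ratings fall at the low/minimal level and under 3% at the high level.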

A similar analysis can be run on the three other metrics in the original rubric. The data above is from a single rater only, as in some terms for some courses a second reader was not used. The other metrics show distributions similar to the first.

Recommendations for possibly improving the results were made in 2010, 2011, and 2012. To date it has been difficult to gather information on whether faculty are deploying any of the recommendations.

At present 2012-2013 data is not yet available. With the rubric now in use for four years, there are also meta-questions about whether this particular process should be continued or whether other approaches to assessment should be examined.
