AI Assisted Feedback in Moodle

April 15 

Laboratory reports in two courses are marked with rubrics. This is an exploration of the capabilities of AI assisted feedback (AIF) in Moodle 4.5. The AIF plug-in was configured in practice mode in assignments for both courses. 

The first example is from SC 250 Botany, a thin layer chromatography analysis of plant pigments. The laboratory report is marked using a rubric as seen below. 

 

The prompt given is seen below, along with the setting Generate feedback automatically on submission for Practice mode, the formative feedback mode of AIF.

 

For AIF to use the rubric, the Grading method must be set to Rubric. For Practice mode, the marking workflow must be set to No; enabling the marking workflow effectively turns off practice mode. There is a summative Teacher-Reviewed mode that involves a different set of settings.

   


The following is an example from SC 130 Physical Science Ohm's law laboratory exercise. This laboratory is also marked using a rubric.

The rubric used in the assignment can be seen above.





As noted above, AIF recognizes the rubric being used. The directions noted that repeating the rubric was not necessary. In this instance the prompt was kept very simple and did not include grade level guidance. This prompt is perhaps too thin, but that is in part what is being tested in this assignment. Generate feedback automatically on submission was again checked.



Under Grade, the Grading method is Rubric and Use marking workflow is No. These settings produce practice mode formative feedback.

Student AG submitted the following report:




The feedback for the above was:

Feedback on Your Thin Layer Chromatography Lab Report

Overall Assessment
Your submission currently only contains an apology, so none of the required components of the lab report are present. The grading criteria can’t be met without the essential sections: an introduction explaining the purpose and expected results, a data table with pigment names and Rf values, a discussion comparing your findings to known reference values, and a clear conclusion. Additionally, the visual evidence (a size‑reduced photograph of the TLC strip) is missing, which is an explicit requirement.

What You Need to Add
1. Introduction – Briefly explain why you performed thin layer chromatography on plant pigments, what you expected to see (e.g., distinct Rf values for carotene, xanthophyll, chlorophyll a, chlorophyll b, phaeophytin), and the overall goal of the experiment.
2. Data Table – Create a simple table (you can use Google Docs’ table tool) listing each pigment detected, the measured Rf value, and the solvent system used.
3. Discussion – For each pigment, state how the observed Rf compares to the standard values provided (e.g., carotene ≈ 0.95, chlorophyll a ≈ 0.65). Discuss possible causes if the values differ, such as solvent front migration, plate age, or sample concentration.
4. Conclusion – Summarize your findings, state whether your experimental Rf values matched the standard values, and mention any insights or implications for plant pigment analysis.
5. Photograph – Insert your cleaned, size‑reduced photo of the TLC strip. Make sure the image is properly labeled and cited if used.

Grammar and Vocabulary
- Use a consistent, academic tone throughout the report.
- Avoid contractions (“I’m”) and informal phrasing.
- Check for subject‑verb agreement and consistent verb tense (typically past tense for describing the experiment).
- Use precise scientific vocabulary (e.g., “chromatographic separation,” “retention factor,” “solvent front”).
- Proofread for punctuation and spelling; a few minor errors are common but can distract from the scientific content.

Next Steps
1. Gather your experimental data and photograph.
2. Draft each section following the outline above.
3. Review the grading rubric to ensure every criterion is addressed.
4. Ask a peer or instructor to proofread for clarity and correctness before submitting.

Good luck, and I’m confident that once you complete these sections, your report will align well with the assignment expectations.

(This feedback was generated by an AI system.)

At first glance one might think that the AI had failed to ingest the report, yet portions of the laboratory text appear later in the AI feedback. The AI appears to have hallucinated, saying, "Your submission currently only contains an apology, so none of the required components of the lab report are present."

Two more laboratory reports came in the next day. Both were more promising. Both students forgot to add their photo of their thin layer chromatography plate. 

For student SH the AI says, "Only a single pigment (yellow) is discussed. The report does not list measured Rf values or compare them with the standard values for chlorophylls, carotenoids, or xanthophylls. A concise table with columns for pigment name, observed Rf, reference Rf, and brief comment would satisfy the grading criteria." And this would be true for the typical chromatography lab. But something went wrong on Tuesday and only the xanthophyll migrated along the plate. During class I had to edit the rubric from:



To:



Although the rubric was edited prior to any student submission, the AI was clearly still using the first rubric. Of note, the rubric did not mention the plant pigments by name, yet the AI correctly inferred the expected plant pigments. That is very promising. The lesson learned: the rubric is ingested at the time the AI feedback is set up, not re-read when the rubric is later changed.

In student MR's report he stated, "That the result I want in my report so I measured my work and the yellow light starts from 3cm to 8cm and the droplet is 2cm to 3cm and the origin of pigments and my RF ratio is; 0.13" This is rather vague and confusing, and can only be deciphered by someone who was there in the laboratory. The "yellow light" is the xanthophyll band. The droplet was the location of the plant extract solution. MR failed to report the solvent front distance, so an Rf value cannot be calculated from the data he provides. Reverse engineering his reported Rf is challenging because I do not know what he decided to use as measurements. If one takes the start at 2.5 cm and the xanthophyll to be at 5 cm, then his solvent front was at 10 cm, but the plates were not that long.

The AI comment was:

"Results & Data
- You state an Rf value of 0.13 for a yellow band.
- The calculation appears incorrect: you note the pigment moved from 3 cm to 8 cm (5 cm) and the solvent front from 2 cm to 3 cm (1 cm), giving an Rf of 5 cm / 1 cm = 5.00, not 0.13."

The Rf value is a ratio of the pigment distance to the solvent front - which is always out ahead of the pigment. So values greater than one are not possible. The AI has tried to make sense of MR's confusing and incomplete statement, but MR left out a critical detail and his word choices were vague and confusing. 
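The arithmetic at issue here is small enough to sanity-check mechanically. A minimal sketch in Python; the function name and the sample distances are hypothetical illustrations, not MR's actual data:

```python
def retention_factor(pigment_distance_cm, solvent_front_distance_cm):
    """Rf = pigment migration distance / solvent front migration distance.

    The solvent front always runs ahead of the pigment, so a valid Rf
    lies between 0 and 1; anything outside that range signals that the
    two distances have been mixed up or misreported.
    """
    rf = pigment_distance_cm / solvent_front_distance_cm
    if not 0 <= rf <= 1:
        raise ValueError(f"Rf of {rf:.2f} is impossible; check the measurements")
    return rf

# Hypothetical numbers: xanthophyll band at 2.5 cm, solvent front at 9.0 cm
print(round(retention_factor(2.5, 9.0), 2))  # 0.28
```

The AI's 5 cm / 1 cm = 5.00 reading would trip the range check immediately, which is exactly the kind of sanity test the feedback itself could have applied.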

I suspect that this system will be excellent at handling more generic essays that include no images, data tables, or calculations; English essays, for example. The system is clearly "reading" the submissions, something that did not appear to be the case for AG. I think this is very promising; I am just pushing the envelope, out at the limits of what the LLM can do. And my students write some confusing sentences. I was surprised that the LLM knew plant pigments. Some of my students do not.

When I told the students about the AI experiment I said, "Some of you use AI to complete assignments, now I am piloting using AI to mark assignments. The future could be you submitting AI assignments which another AI then marks. And when you graduate, you won't actually know anything."

What follows is running, time-linear commentary as assignments came in. Most of this is just a memory dump for me so that I can remember what happened when.

16 April

The AI does well when there is more text and no images; perhaps the more text the better, in some sense. The issue of the rubric being changed on the fly arose again for the following submissions, so the AI is correct in noting the missing plant pigments based on the original rubric. The feedback is more than some of my students will read and digest ("too long, didn't read"), but still good guidance. The two reports that were submitted were from partners who submitted the same report 24 hours apart. They were supposed to each write up their own report; I must not have been sufficiently clear about that. Although the reports were identical, true to usual LLM behavior each received slightly different feedback. And while I did not expect the AIF system to detect a duplicate submission, some faculty might.

20 April

A student resubmitted and the response is markedly better. Part of this might be a need to be far more specific in the prompt. Overall the response is surprisingly good. As I said before, if this were a traditional English essay marked with a well designed rubric and a carefully crafted prompt, I think AIF would do a surprisingly good job of providing feedback.

For example, the LLM has noted:

Adding a brief statement of the hypothesis (e.g., “chlorophylls will migrate less than carotenoids due to higher polarity”) would make the introduction stronger.

I did not mention that the isopropyl alcohol being used was polar; I said instead that molecular weight was a factor in migration distance. This is a prompt fault. I have to get comfortable with being more verbose in my explanation of what was used and what was being done.

The LLM wanted to see materials and methods sections. My laboratory report template used to include a materials section and a methods section. I dropped those from the rubric over a decade ago. The students were so focused on getting the methods section written correctly that they were distracted from the science at the core of the experiment. The format was verb-direct object: Pour 500 ml water into the beaker. Add 10 grams of salt. Stir until the salt is dissolved. Since the online textbook includes the materials and procedure, students turned to copying and pasting both into their reports. This seemed like fair use to me, so I allowed it. Correct materials lists and procedures are going to be essentially identical.

Dropping those two sections brought a sharper focus on the introduction and on the discussion and conclusions sections. This worked better than I had expected. Again, I think the rubric, the prompt, or both need to be clearer that there are no materials and procedure sections.

The data table discussion is surprisingly good. The LLM even correctly caught and corrected a mathematical error. A meta-issue that the LLM could not know (because the rubric changed on the fly) is that as far as I could see in the laboratory, only the xanthophylls separated out. The LLM did not see the revised rubric, having ingested the one that was in place when AIF was set up.

The discussion and conclusion notes are good, again rather surprisingly on point for such an obscure topic.

The grammar, vocabulary, and formatting comments are perhaps the strongest performance by the LLM. Those are correct and often echo what I say over and over again: headings must be done using Heading 1 and Heading 2 styles, not just making the heading bold, normal-font, and centered.

Perhaps the way to use AIF would be to use the instructor hold on feedback until reviewed. It might be quicker to delete the off-target feedback and then release what remains to the student. This might be the way to go for some instructors, though I am not sure I would see any increase in efficiency. I have a copypasta document that has evolved over the past 19 years with comments on the most common specific errors and how to get those corrected.

One other thought: in both SC 250 Botany and SC 130 Physical Science the students write a lab report every week. Another professor in the division, by contrast, has some lab reports that stretch across multiple weeks with multiple drafts. What that professor is doing is how scientific publishing works: draft, review, and redraft across an extended period of time. My approach is more quantity at a trade-off in quality. Students can do multiple drafts if they wish, but by the next week we are on to the next topic and the next laboratory report. The result is an expectation of a less polished laboratory report. The LLM is looking at publication-grade standards; I tend to be more flexible.

~~~

The first of the physical science laboratory reports set up in AIF came in later in the day. I was laughing because I had told the students the AI could not see images, and so instead of a screenshot of their data table from Desmos, I asked them to make the data table in Google Docs. So the AI says, "The table is shown as plain text, not as a screenshot, and lacks a proper caption." Because that is what the rubric called for: a screenshot. But when the student dutifully used a screenshot of the graph in the next section, the AI shot back: "Data graph - No graph is present; the section is essentially empty." Damned if you do, damned if you don't, as we say in English. You cannot win. Without image recognition capabilities, the AI model is going to issue confusing and conflicting feedback for a course such as physical science. The AI rates the report suboptimal and no evidence, but that is way too low; I will be marking some sections at optimal, some at sufficient. This is a non-majors science course. Again, that is my fault: I need to learn to craft a prompt that better reflects the leniency I have when I mark reports. That said, the feedback is accurate for text materials.

Note that the AI is producing inline LaTeX. This is excellent. The catch is that Moodle expects delimiters \( ... \) or \[ … \] to activate MathJax and display the LaTeX. Maybe that is something I should be adding to the prompt?
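For illustration, the difference is only the delimiters. This is an assumed example; the actual behavior depends on how the site's MathJax filter is configured:

```latex
% Pasted bare, as the AI emits it, Moodle shows the raw markup:
R = \frac{V}{I}

% Wrapped in inline delimiters, the filter hands it to MathJax:
\( R = \frac{V}{I} \)

% Display-style delimiters render it on its own line:
\[ R = \frac{V}{I} \]
```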

April 21

The AI is adapting and learning. This one not only has spot-on comments; the AI also caught the fact that the Rf values were the published values and not the experimental values.
Since many groups only observed xanthophyll, discuss whether any of the other pigments showed up in your samples.

My jaw dropped when I read, "Since many groups only observed xanthophyll, discuss whether any of the other pigments showed up in your samples." The AI appears to be reacting to the prior reports, since my previous experience suggested that the AI did not ingest the modified rubric. Is this model self-modifying based on prior submissions? In any case, that line was spookily accurate. And the AI read the table!

To prevent reopening of the assignment and to make review of the submission easier, I have been going ahead and grading at the max (a grade less than the max is set to reopen the assignment automatically).

April 17

Wait... how did the AI correctly catch this error in the image? 

3. Label the graph axes: The x‑axis should be “Voltage (V)” and the y‑axis “Current (A).”

April 18

The feedback on this one was good for non-image elements in the report. In this case I decided to go ahead and mark the report. I left the original AIF output in the AI assisted feedback dialog box. I copied that to the Comments and edited the AIF output to delete the image-related elements and other elements that were not relevant, including the LaTeX.


In addition, I copied the LaTeX from the AI assisted feedback dialog box. I opened the equation editor in the Feedback dialog box that appears under the AI feedback dialog box. I pasted the LaTeX into the equation editor as seen above.


The equation editor displayed the preview seen above. After closing the equation editor, I still saw only LaTeX in the feedback dialog box. Then I marked the report and saved the grader page. I am aware that in Canvas, formatted LaTeX does not appear until a page is saved and then viewed. 


This does not occur in Moodle: I still saw only the raw LaTeX along with the \( delimiters \). This suggests to me that the MathJax library is not active in the grader page. MathJax is an open source library that both Canvas and Moodle use to display mathematical equations; it handles both LaTeX and MathML. I used to call the library from my own web pages to render MathML in HTML5.

There was one behavior of which I was completely unaware: once I had marked and saved my marking work, the AI feedback was GONE. That caught me completely off guard, and it makes no sense. If I were relying on the AI feedback as my feedback and marking wipes it clean, then how will the student know what changes to make?


Worse, the Generate AI feedback button is grayed out, so I cannot regenerate the lost AI feedback. I presume the student also cannot regenerate the AI feedback. This time I happened to have saved an edited version of the feedback in the comments, but now I am wondering if AIF displays the AI feedback to the students prior to grading. I am guessing that AIF does display the AI feedback provided that I have not graded the assignment. I think I understand the logic here: I have AIF set up in the formative assessment mode, where the assessment is intended to allow the student to improve the submission prior to the submission being given a final grade, a summative assessment. That makes sense, but it is not the way I run my own workflow. I go ahead and mark the submission, grading it. The student can still work on the assignment and resubmit, but that is up to them. They can also do nothing and keep the grade they have. Many students choose not to resubmit. My system ensures that whatever is submitted is graded. If I do this the AIF way, I have to leave the assignment ungraded until they submit their final draft. But I will have no way of knowing which resubmission is going to be their final draft. They can resubmit as many times as they like.

I can see the solution, and that is to copy the AIF output into the Comments, which I did in this case, and/or copy the output into the second feedback box. This also makes sense to me because the AIF output may often need editing and revising. 

I obviously haven't set up AIF to do summative assessment, but that is not the use case that is useful to my workflow: I want the students to have the option to resubmit. I am not trying to achieve a "single submission, once and done" goal with AI. I want AI to provide formative feedback. Which it does. I have to remember to copy the feedback out of the AI feedback dialog box. 

This report is missing table captions, has a graph that is inappropriately scaled, and does not report the resistance in the conclusion. 

I know my students are "too long; didn't read" and I fear that will be an issue with the verbosity of the AI feedback. My feedback would be much shorter and, hopefully, more to the point, as in the above sentence. The AI responded:

Feedback on Your Ohm’s Law Laboratory Report

Overall Impression

You’ve made a good start with the narrative parts of the report, showing curiosity about how electricity works. However, several key sections are missing or incomplete, which makes it hard for the instructor to assess how well you met the assignment goals. Below is a detailed review based on the rubric, followed by specific suggestions for each area.


1. Introduction

Rating: Sufficient
Why:
- You clearly state the purpose of the lab (explore current, voltage, and resistance).
- The questions you intend to answer match those in the rubric.

How to improve:
- Add a brief sentence about why these questions matter (e.g., application of Ohm’s law in real‑world devices).
- Keep the paragraph tight and focused – avoid extraneous wording.


2. Data Table

Rating: No evidence
Why:
- There is no visible table, caption, or units.
- The rubric requires an embedded image with a caption that lists variables and units.

How to improve:
- Create a table (Excel, Google Sheets, or LaTeX) with columns for Voltage (V), Current (A), and any derived quantity.
- Convert the table to an image (PNG/JPG) or embed the spreadsheet if the platform allows.
- Add a caption: Figure 1 – Voltage vs. Current data (V, A).


3. Data Graph

Rating: No evidence
Why:
- No graph image is present; the placeholders show only blanks.
- The rubric specifies labeled axes and a trendline.

How to improve:
- Use Desmos, Excel, or another plotting tool to graph the data.
- Label the x‑axis as Voltage (V) and the y‑axis as Current (A).
- Fit a line and display the slope; the slope is the resistance (1/R).
- Save the graph as an image and embed it in the report.


4. Data Analysis

Rating: No evidence
Why:
- No image or text demonstrating the mathematical relationship.
- The rubric calls for an embedded image that shows analysis (e.g., regression calculation, slope, intercept).

How to improve:
- Show the regression details:
- Equation of the line: (I = kV + b)
- Where (k) is the slope (conductance) and (b) is the intercept (ideally close to 0).
- Include the numerical value of the slope and translate it to resistance: (R = 1/k).
- Capture this analysis as a screenshot or PDF and embed it.


5. Conclusion & Discussion

Rating: Suboptimal
Why:
- You describe the qualitative relationship (linear) but do not provide the actual resistance value.
- There is no comparison of the experimentally derived resistance to the resistor’s value.
- The discussion lacks reflection on possible errors or uncertainties.

How to improve:
- State the measured resistance (e.g., (R_{\text{exp}} = 12.5 \,\Omega)).
- State the resistor’s labeled value (e.g., (R_{\text{label}} = 13 \,\Omega)).
- Explain the difference: (\Delta R = R_{\text{exp}} - R_{\text{label}}).
- Comment on factors that could contribute to the discrepancy (contact resistance, temperature, measurement error).
- Conclude with a brief synthesis: Ohm’s law is confirmed within a 4 % margin of error.


6. Formatting, Syntax, and Language

Rating: Suboptimal
Why:
- Paragraphs lack proper structure; some sentences are fragmented.
- The report uses informal language (“push harder”).
- No figure numbering or references.

How to improve:
- Use consistent paragraph spacing.
- Introduce figure numbers in the text (e.g., “Figure 1 shows the data table”).
- Replace informal phrasing with academic language: increase the applied voltage.
- Proofread for typos (“wires” vs. “wire”) and grammar.


7. Final Recommendation

Criterion | Rating | Recommendation
Introduction | Sufficient | Add context on importance of Ohm's law.
Data Table | No evidence | Create & embed table with units.
Data Graph | No evidence | Generate graph with labeled axes and trendline.
Data Analysis | No evidence | Include regression screenshot & slope calculation.
Conclusion | Suboptimal | Provide measured resistance, compare to resistor, discuss errors.
Formatting & Language | Suboptimal | Tighten prose, use figure numbers, proofread.

Encouragement:
You have grasped the core idea of Ohm’s law and demonstrated a clear understanding of the lab’s goals. By adding the missing visual data, making precise calculations, and tightening your writing, you’ll meet all the rubric expectations and showcase a polished report. Keep up the good work—just add a few more details and it’s ready to shine!

(This feedback was generated by an AI system.)


The How to improve note in the introduction is extraneous: the lab did not ask students to comment on why the questions matter and how they pertain to real world applications. But, again, I see the AI as a starting place which I can then edit.

I edited down the above to:

Feedback on Your Ohm’s Law Laboratory Report

1. Introduction: Meets the expectations optimally.
2. Data Table: Missing the caption with the variable name and units of measurement. 
3. Data Graph: The graph should be auto-scaled to pull the data off of the y-axis. 
4. Data Analysis: Complete
5. Conclusion & Discussion
- You describe the qualitative relationship (linear) but do not provide the actual resistance value.
- There is no comparison of the experimentally derived resistance to the resistor’s value.
6. Formatting, Syntax, and Language
- The report uses some informal language (“push harder”).
- Proofread for typos (“wires” vs. “wire”) and grammar.
(This feedback was generated by an AI system with extensive editing by the instructor.)

Something the student might read. This new generation is not a reading generation. Thirty years ago some students could be seen on campus reading Harlequin romances. Now they scroll their infinite timelines consuming visual content.

I went ahead and graded this. Because I now know that this wipes the AIF, here is the AI feedback and my edited comment, just for the record:

Introduction
The introduction gives a nice overview of Ohm’s Law and explains why the experiment matters, but it could be strengthened by clearly stating the specific questions you plan to answer and what you expect to find. Score: Suboptimal – evidence of intent, but lacks details on the anticipated results.

Data Table
No image of the data table is included, and there is also no caption that lists the variables and their units. Because the rubric requires an embedded image and a caption, this section is No evidence.

Data Graph
The submission has an empty section labeled “Data Graph”. No graph image, no labeled axes, and no trendline are provided. This is No evidence for the rubric criterion.

Data Analysis
Again, the “Data Analysis” section is empty. The rubric calls for an embedded image that shows the mathematical relationship between current and voltage (a fitted line or calculated slope). Without this, this criterion is No evidence.

Conclusion
The conclusion contains an appropriate statement that current is directly proportional to voltage when resistance is constant, and it lists the definitions of voltage, current, and resistance. However, it does not provide the experimentally measured resistance of the circuit, nor does it calculate the difference between this value and the rated resistance. Therefore this part meets only Suboptimal—the key numerical result is missing.

General Formatting, Grammar, and Vocabulary
The report text is generally clear and free of major grammatical errors. Nonetheless, the lack of visual components (tables, graphs, analysis screenshot) and incomplete captions hinder the overall presentation. The rubric’s final criterion, which asks for “proper formatting, syntax, and appropriate command of vocabulary and grammar,” is therefore rated Suboptimal.

Overall Comments
Strengths 
- The introduction sets a good context and shows you understand the concept.
- The terminology section in the conclusion is concise and correctly uses the units.

Areas for improvement 
1. Include all visual data: Attach the Desmos screenshots for the data table, graph, and analysis.
2. Add captions and units: Each embedded image should have a caption that lists the variable names and units (e.g., Voltage (V), Current (A)).
3. Label the graph axes: The x‑axis should be “Voltage (V)” and the y‑axis “Current (A).”
4. Provide the slope and resistance calculation: Use the fitted line to extract the slope (m), which equals resistance (R). Report the value with appropriate significant figures (e.g., (R = 200 \pm 5\ \Omega)).
5. Compare to the resistor’s rated value: State the nominal resistance (e.g., (200\ \Omega)), then calculate the difference (\Delta R = R_{\text{measured}} - R_{\text{rated}}).
6. Keep the sections organized: Insert the images directly under their section headings, followed by brief interpretive text.

Remember that the rubric is designed to guide you toward a comprehensive report. By addressing the missing visual data and explicitly answering the posed questions, your final submission will be rated Optimal. Good luck!

(This feedback was generated by an AI system.)

My edited version:

Assignment 13: Ohm's law laboratory report
The introduction should tell the reader the intended goals of the experiment. For the Ohm's law laboratory the goals were posted in Moodle rather than on the board. The intent was to determine the nature of the mathematical relationship between current and voltage. The theory being tested was Ohm's law, that the voltage would vary linearly with respect to the current. The slope of that linear relationship would be the actual resistance of the resistor. The resistor also has color band codes for the designed resistance of the resistor. The mathematical difference between the experimentally measured value for the resistance and the designed resistance value of the resistor can be reported as a percentage error.
The table is missing a caption.

The conclusion contains an appropriate statement that current is directly proportional to voltage when resistance is constant, and it lists the definitions of voltage, current, and resistance. However, it does not provide the experimentally measured resistance of the circuit, nor does it calculate the difference between this value and the rated resistance. Therefore this part meets only Suboptimal—the key numerical result is missing.
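The numerical steps this feedback asks for, taking the slope of the voltage-versus-current line as the resistance and comparing it to the rated value, can be sketched in Python. The readings and the 200 ohm nominal value below are illustrative assumptions, not this student's data:

```python
def slope(xs, ys):
    """Ordinary least-squares slope of ys versus xs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    numerator = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    denominator = sum((x - mean_x) ** 2 for x in xs)
    return numerator / denominator

# Illustrative readings for a resistor with a nominal 200 ohm color code
current_a = [0.005, 0.010, 0.015, 0.020]   # current in amperes
voltage_v = [1.02, 2.05, 3.01, 4.08]       # voltage in volts

r_measured = slope(current_a, voltage_v)   # slope of V versus I is the resistance
r_rated = 200.0                            # designed value from the color bands
percent_error = abs(r_measured - r_rated) / r_rated * 100
print(f"R = {r_measured:.1f} ohms, percent error = {percent_error:.1f}%")
```

With these made-up readings the fitted resistance comes out near the rated value, which is the comparison the conclusion section should report.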

What I do not know is whether the students can see the AIF after I have marked their submission. I am guessing that they cannot, which must have baffled some of them when I marked their assignment and told them to have a look at the AIF. Oops. Lesson learned. Copy and edit. And, again, to see my comments you would have to go back to the first submission as the assignments automatically reopened after marking. 
