Optics of reflection and refraction
Monday opened with Bill Nye's Optics and light DVD, a technological throwback.
Plotting the actual depth of a penny underwater against the apparent depth for different containers produces a linear regression with a slope equal to the index of refraction. The graduated cylinders limit the viewing angle to near vertical, which invokes the small angle approximation: the sine of the angle is approximately equal to the angle itself. This is what allows apparent depth to be used to find the index of refraction. I still think I should set up a demonstration with water in one cylinder and corn syrup in another to see whether the difference can be seen.
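For reference, a sketch of why the slope is the index, assuming near-vertical viewing so the small angle approximation holds (the angle labels and depth symbols here are my own notation, not the lab handout's):

```latex
% Snell's law at the water surface: n_{water} \sin\theta_{w} = n_{air} \sin\theta_{a}
% A ray from the coin reaches the surface at horizontal offset x, so
%   \tan\theta_{w} = x / d_{actual}, \qquad \tan\theta_{a} = x / d_{apparent}
% For near-vertical viewing, \sin\theta \approx \tan\theta, giving
n \approx \frac{\tan\theta_{a}}{\tan\theta_{w}}
  = \frac{x / d_{apparent}}{x / d_{actual}}
  = \frac{d_{actual}}{d_{apparent}}
% Hence d_{actual} = n \, d_{apparent}: actual depth plotted against
% apparent depth is a line through the origin with slope n.
```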
Wednesday I arrived early to set up the refraction demonstrations. I tried a new layout.
Although the new layout perhaps afforded better overall visibility, I am not sure it made anything clearer. I am not sure the students could clearly see what I was demonstrating. The laser beam is essentially invisible in flight. Perhaps a little dye in the water would help.
Maybe a fiber optic lamp would help with the wrap-up.
Thursday opened with the two basins, one with water and one without, to demonstrate refractive effects. This term I forgot the larger coins. Reflection off the surface of the water remains problematic for this demonstration; it would probably be better done at night with lighting from the sides below the surface of the water.
This term, in both the 8:00 and the 11:00 class, data collection appeared to be problematic. One group obtained distinctly non-linear results. Other groups found slopes of one for both reflection and refraction. One group found an index of reflection of 1.34 and an index of refraction of 1.04; I verified in class that these were not reversed. At this point I have no idea why this laboratory showed the consistent measurement biases seen this term.
Rindy holds the meter stick while Siniann reads the measurement
Wayne places the object while Emleen holds the mirror.
Physical science laboratory is not always deadpan boring
Here the layout of the reflection lab can be clearly seen
That the measurements of the indices did not come out as expected still works in a course where what we measure is what we know. The results are the results; to use modern terminology, it is what it is. If the students have measured carefully, then their facts are their best knowledge of a system. This is at the core of the course. Every student in the course is a non-major with respect to physical science, and only one is in a STEM field. So the goal is to show how science works and why science can be trusted, not to deliver the pile of facts of science. In this class we do not believe anything: belief is for faith-based activities. We have facts on the ground, and if we did a good job ascertaining those facts, then those are our facts. This is a departure from the typical science course as taught.
The course also keeps coming back to data as the driver of mathematical models, which are at the heart of the course. This laboratory yields two linear regressions.
Desmos provides the mathematical tool to analyze the data. As can be seen above, the data led to what appear to be reversed results, but this was a data set I looked at personally. This is what the students actually measured. These are their facts, and in this course that is the theme: systems in physical science are defined by mathematical models that can be found from carefully done measurements of physical systems.
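For anyone reproducing the analysis outside Desmos, a minimal sketch of the same regression in Python; the depth pairs below are hypothetical placeholders, not this term's class data:

```python
import numpy as np

# Hypothetical (apparent depth, actual depth) pairs in centimeters.
# Real class data comes from the graduated-cylinder measurements.
apparent = np.array([3.0, 5.9, 8.9, 11.9])
actual = np.array([4.0, 7.9, 11.9, 15.9])

# Least-squares line through the data: actual = slope * apparent + intercept.
# For the refraction data, the slope estimates the index of refraction n.
slope, intercept = np.polyfit(apparent, actual, 1)
print(f"index of refraction ≈ {slope:.2f}")  # near the textbook 1.33 for water
```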
I am keenly aware of the argument that physical science has a bias in that it investigates only systems that render mathematical constructs amenable to the mathematical knowledge available. People find math in those systems that are amenable to the techniques of algebra and differential equations. As math has expanded, so too has the number of systems that physical science can explain. The bias is that sufficiently complex systems tend not to be studied because the tools of mathematics available to humans fall short.
Large Language Models provide perhaps the first glimpses of systems that will never be reducible to equations: layers of neural network inputs and outputs that recognize patterns but have no governing equation.
When some future LLM "proves" something to be true, will that be only statistically true or mathematically rigorously true? Those questions are beyond the scope and design of the course at this time, but perhaps should be brought into the curriculum at some point. The question of "What does it mean to have solved something?" arises very quickly. ChatGPT can give an answer, but that answer may or may not be true. How could an LLM then prove anything? Yet systems related to LLMs will likely be the best predictors of weather systems and natural events such as earthquakes in the future, detecting patterns that humans and their algebraic equations will never see.