So far, I have not taught an online course. One of the reasons I hesitate is my concern about the validity of assessment. Or to put it more bluntly, it seems too easy to cheat. A recent article on Bright reinforces my concerns. It’s about a company called Studypool.
“Studypool, one of a bevy of on-demand tutoring platforms entering the ed-tech landscape in the past couple years, is being used as a vibrant marketplace for cheating and plagiarism.”
(hat tip to Marginal Revolution)
When colleagues ask me why I don’t assign out-of-class group projects, I often suggest they google the phrase “I hate group projects,” where they will find comments like this:
“I hate group projects. I’ve only had ONE group project EVER where I didn’t end up doing the majority of the work (it was my one friend). I remember a few years ago, I had to do a group project with two of my (slower) friends. We had to look up information about like ten olympic events–I think I put them in charge of finding the information for two of the events each, which gave me six to do AND the powerpoint. We only had the class time to do it. They were literally playing games on the computer the entire time and I had to do all the work, even their parts. At the end of the class, they said, “Ugh, he never gives us enough time to do these projects.” Luckily, I had managed to throw together all the information (the project was pretty bad) and I (well, “we”, but you know) managed to get a B.”
It occurred to me that there might be more material on YouTube. Was there ever:
And that’s just a small sample.
A paper published in the most recent issue of the journal Intelligence has important implications for value-added measures of teaching. Here is the abstract; I have underlined the relevant sentences:
“Low socioeconomic status (SES) children perform on average worse on intelligence tests than children from higher SES backgrounds, but the developmental relationship between intelligence and SES has not been adequately investigated. Here, we use latent growth curve (LGC) models to assess associations between SES and individual differences in the intelligence starting point (intercept) and in the rate and direction of change in scores (slope and quadratic term) from infancy through adolescence in 14,853 children from the Twins Early Development Study (TEDS), assessed 9 times on IQ between the ages of 2 and 16 years. SES was significantly associated with intelligence growth factors: higher SES was related both to a higher starting point in infancy and to greater gains in intelligence over time. Specifically, children from low SES families scored on average 6 IQ points lower at age 2 than children from high SES backgrounds; by age 16, this difference had almost tripled. Although these key results did not vary across girls and boys, we observed gender differences in the development of intelligence in early childhood. Overall, SES was shown to be associated with individual differences in intercepts as well as slopes of intelligence. However, this finding does not warrant causal interpretations of the relationship between SES and the development of intelligence.”
It is well understood that children in a classroom start at different levels. Value-added assessment attempts to control for this by comparing gain scores. In other words, the child’s test score at the beginning of the school year is subtracted from the child’s score at the end of the year, and the increase is assumed to be the value added to the student by the teacher.
However, this stands on the assumption that children learn at the same rate. This paper (“Socioeconomic status and the growth of intelligence from infancy through adolescence”) tells us that the slope of the line between beginning- and end-of-year test scores is related to social class. For our purposes here we need not concern ourselves with the direction of causality or why this correlation exists. All we need to know is that scores on intelligence tests are strong predictors of academic achievement. Thus, we can predict that, in general, students from higher socioeconomic backgrounds will show greater value added, and these measures will be unfair to teachers who teach children who live in poverty. Over time, this will create a disincentive for our best teachers to work with the children who most need their help.
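The arithmetic can be made concrete with a toy sketch. All the numbers below are invented for illustration; the point is only that if growth rate tracks SES, a simple gain score credits (or penalizes) teachers for their students’ backgrounds rather than their teaching:

```python
# Hypothetical example: two classrooms taught equally well, but whose
# students' annual test-score growth differs for SES-linked reasons.
# All values are invented for illustration only.
high_ses_class = {"start": 106.0, "annual_growth": 1.5}
low_ses_class = {"start": 100.0, "annual_growth": 0.5}

def gain_score(classroom, years=1):
    """Value-added as typically computed: end-of-year score
    minus start-of-year score."""
    end_of_year = classroom["start"] + classroom["annual_growth"] * years
    return end_of_year - classroom["start"]

# Identical teaching, yet the gain-score "value added" differs:
print(gain_score(high_ses_class))  # 1.5
print(gain_score(low_ses_class))   # 0.5
```

Even though nothing about the teaching differs in this sketch, the teacher of the low-SES classroom appears to add less value, which is exactly the unfairness described above.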
A post in The Conversation tells us:
“Universities and governments around the world rely on student evaluations to assess university teachers and degrees. Likewise, potential students check online ratings when deciding where to study. These evaluations are based on the logic that students must know best what helps them learn. So it’s surprising to discover that students may be the worst people to ask about the quality of education.”
The article also highlights the problem with student evaluations of instructors:
“Many educators worry that students are more positive about teachers who give better marks regardless of what the students learn, and are more negative about teachers who make students work hard in order to learn. If this is true, it means the simplest way for a teacher to get a good evaluation is to make it easy for students to get good marks.
As it happens, students who rated their current teacher most highly got better marks in their current course but did much worse in later courses. This confirms the fears of educators: students’ evaluations are linked with current grades, but also with students’ failure to learn things they need for the future. So, a student who is happy with their grade and teacher should worry — they may not have learnt that much.”
A paper published in Innovative Higher Education reports that students in online courses give better evaluations to instructors they think are male, regardless of the actual gender of the instructor. Here is the abstract:
“Student ratings of teaching play a significant role in career outcomes for higher education instructors. Although instructor gender has been shown to play an important role in influencing student ratings, the extent and nature of that role remains contested. While difficult to separate gender from teaching practices in person, it is possible to disguise an instructor’s gender identity online. In our experiment, assistant instructors in an online class each operated under two different gender identities. Students rated the male identity significantly higher than the female identity, regardless of the instructor’s actual gender, demonstrating gender bias. Given the vital role that student ratings play in academic career trajectories, this finding warrants considerable attention.”
This is a very interesting study; however, the sample sizes were very small, and we would certainly want to see the results replicated. To their credit, the authors acknowledge this limitation:
“First and foremost, these results need to be replicated in other similar online classes. A single case study cannot establish a broad pattern. However, it does suggest the existence of one and provides incentive for further exploration”
Hat tip to Boing Boing.