
Rant about how "The problem isn’t that kids are using AI to write homework assignments. The problem is we’re assigning kids problems AI can do" completely misses the point of educational exercises 

I keep seeing a lot of stuff floating around saying things like "The problem isn’t that kids are using AI to write homework assignments. The problem is we’re assigning kids problems AI can do" (seen this morning), and I'm sorry, but I find this take incredibly naive. I'm no fan of boring assignments where students do some activity just because, and nobody seriously evaluates them or gives them feedback. That's a real problem, but not one this catchy suggestion addresses.

We still teach students addition, subtraction, etc., despite having had calculators for decades. Why? Because if you only ever punch numbers into a calculator you don't actually understand numbers! We still teach students how to implement linked lists not because we need more linked list implementations, but because it's a stepping stone to understanding data structures in general. We don't ask students to write essays because we care about having more text sequences that take the form of an essay. We ask them to write essays so they can practice organizing their thoughts and stating their opinions and arguments clearly in a form that other humans can understand (to practice clear communication!).
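For anyone who hasn't seen it, the linked-list exercise I mean looks something like this minimal sketch (my own illustrative code, not anything specific from a course; the names are made up):

```python
# The classic "implement a singly linked list" exercise: the point isn't the
# artifact, it's working through pointers/references and traversal by hand.

class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt  # reference to the next node, or None at the end

class LinkedList:
    def __init__(self):
        self.head = None

    def push_front(self, value):
        # New node points at the old head; it becomes the new head.
        self.head = Node(value, self.head)

    def to_list(self):
        # Walk the chain of .next references until we fall off the end.
        out, node = [], self.head
        while node is not None:
            out.append(node.value)
            node = node.next
        return out

ll = LinkedList()
for v in (3, 2, 1):
    ll.push_front(v)
print(ll.to_list())  # [1, 2, 3]
```

A student who has traced that `while` loop by hand is in a much better position to understand trees, queues, and everything built on top of them.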

Replace "AI" in the quote above with "online outlets that do your homework for a price." Or replace it with "parents." We ask for these things not because the outputs themselves are generally important, but because we care about the learning outcomes that arise from a student producing them; learning how to produce these outputs is how we teach students to think critically, or understand numbers or data structures. Yes, this can be (and often is) done poorly, and that needs fixing, 100%. But asking students to do things others already know how to do is a critical pedagogical tool for building understanding.

Never mind that these lines of argument give the "AI" too much credit. ChatGPT can't actually do math, for example; it has just memorized millions of worked examples of math problems. That's why it screws up if you ask it to work with really large numbers: it hasn't seen those in its training data, so you get the output of smoothing over an uneven probability distribution.

EDIT: I want to clarify something important: I'm specifically arguing against the idea that just because ChatGPT-like systems can "do" an assignment, we should stop using that assignment for teaching, which is some nonsense I've been hearing a lot lately. Because I wasn't clear, some people have read this post as implicitly defending business as usual. That's not what I mean. The kinds of assignments I gave as examples *can* be used for excellent learning, but any style of assignment can be given or graded thoughtlessly in a way that leads to no learning at all, and I don't care to preserve those uses. So please do reevaluate assignments and toss the ones that don't work; just toss the ones that actually don't lead to learning (there were plenty of valid reasons to do this even before ChatGPT), and keep the ones that do, even if a few students might use automated systems for them. We already have enough trouble with instructors more concerned with cheating detection than with learning outcomes.



@csgordon This is a good way of putting it! These takes also often seem to include advice like "just design homework GPT-3 can't solve" but I am honestly not even sure where to begin approaching that (and I'm not convinced anyone knows)


@adrian yeah, the class of things these systems can "solve" encompasses many of the early stepping stone tasks that humans need to learn on their way to the things even OpenAI doesn't pretend the systems can do... Like yeah, I can design assignments these systems aren't helpful with, but humans who haven't worked on the easier problems *also* can't solve the hard problems... I'm curious to see how the recommendations from our university committee on dealing with ChatGPT turn out

@csgordon This is a great point! It's like saying "Why should people bother practicing boiling pasta/throwing a ball/playing guitar chords when they can just have a machine do it?"

Counterpoint to one of your specific examples: I've done some pretty advanced programming over the years and I still have never done the linked list exercise.

@grvsmth sure, it doesn't have to be a linked list specifically, there are definitely paths that don't quite look like my example. But you surely worked up from simpler to more complex over time right?

@csgordon In that particular case I may have tried to do something more complex and had to backtrack and work on simpler problems first 😁

But I definitely have worked up from simpler to more complex on tons of skills, with lots of repetition!

@grvsmth @csgordon I'm not sure I ever did the linked list exercise, but I definitely learned a lot by implementing Bloom filters by hand as an exercise.

There's always value in practicing doing the basic thing by hand, if only to really get insight into how it composes upward into the New Idea that nobody but you has
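(For readers who haven't met them: a Bloom filter is a compact set membership structure that can return false positives but never false negatives. A toy by-hand version of that exercise might look like this sketch; the code and its parameters are my own illustration, not what @trochee implemented.)

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: each item sets k positions in an m-slot bit array.
    Lookups can yield false positives, but never false negatives."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m)  # one byte per bit, for simplicity

    def _positions(self, item):
        # Derive k independent positions by salting a hash with the index i.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        # "Maybe present" only if every one of its k bits is set.
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter()
bf.add("pasta")
print("pasta" in bf)   # True: added items are always found
print("guitar" in bf)  # almost certainly False (tiny false-positive chance)
```

Building one of these by hand forces you to confront hashing, bit manipulation, and probabilistic trade-offs all at once, which is exactly the kind of composing-upward that the exercise teaches.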

@trochee @csgordon Yup! I think one reason I appreciate statistical analysis is how much we broke it down to basic steps in my intro stats class...


@csgordon I might be the exact person who you're ranting about but... honestly I am *thrilled* to see a tool for automating content-free text on demand, because I'm hoping that it'll kill a specific kind of writing assignment.


@csgordon
I got really good at one point at writing the kind of stuff that ChatGPT writes in order to make page count.

I think I got a lot more in terms of thinking & writing skill from shorter assignments with higher standards re: conceptual clarity and references.


@nat I totally agree that artificial hard page limits are garbage (I hated those too). But thinking back to the instructors who gave me such assignments (where I also had to write filler), I don't think someone inclined to prioritize those page limits over the qualities it seems we both value is going to be thoughtful about this. Those are the folks I see going out of their way to apply "AI generated text" detectors with absurd false positive rates, thereby accusing countless innocent students of plagiarism.


@nat it's possible this is just pessimistic of me, though. Either way I'd be happy to have that kind of assignment disappear.

@csgordon when you put it this way, it seems like the root problem isn't the exercises, it's the fact that students are forced to submit exercises in exchange for grades rather than doing exercises in order to learn

@technomancy Grades are totally orthogonal to this. The argument I'm pushing back on is from those saying that "an AI can do this" is a valid measure of whether an assignment is worth assigning. I'm saying that is a terrible reason to discount a task as a source of learning, because especially for beginners, doing tasks we already know how to automate - graded or not! - is often a terrific source of learning.

@csgordon grades are orthogonal to the original question of "is a given exercise worth doing"; sure

I'm not convinced grades are orthogonal to the broader question of "does the availability of generative neural networks make meaningful classroom learning more difficult?"

that is to say, the availability of generative neural networks doesn't threaten meaningful classroom learning; it only threatens the method of judging learning based on grades, which honestly never really worked very well to begin with

@technomancy @csgordon There's a bit of both. In nearly all modern classroom settings, educators have dual mandates: to educate and to assess. The methodologies in both are superficially similar (assignments), but very different. One of the big challenges for educators is keeping this distinction clear: what parts of an assignment drive understanding, and what parts gauge it. It's entirely possible for ChatGPT to be a royal pain for the latter, while being helpful for the former.

@csgordon I agree, and hope most people would recognize the value of the accretion process but I think that many, with their sights focused only on the goal, may not grasp the "why" of what they're doing, and it may never be explained to them. Nor can it in some cases.

I think it can be helpful to preview why something is being done, prior to the assignment, when possible. Understanding the why doesn't prevent people from avoiding the process, but they hopefully acknowledge their loss.

@mhanson101 absolutely. I do this for all of the assignments I give, for exactly this reason.

@csgordon I wasn't meaning that you didn't do this btw, that was just general ramblings on my part. If i had to guess i would have assumed you did.

I would imagine you have less trouble with this kind of issue in general (cheating, assignment avoidance) because of your field, or am I mistaken?

@mhanson101 studies show that rates of cheating are roughly even across all fields. Students generally cheat in any area because they're confused close to a deadline, or botched time management, or are generally overloaded (jobs, life). None of that changes in CS.


@csgordon Yes! And (as a teacher of maths) we've been dealing with this for years: every time we try to explain why we want some assessments taken without access to the internet, our institution tells us we should set "better questions".

types.pl

A Mastodon instance for programming language theorists and mathematicians. Or just anyone who wants to hang out.