In my 11th grade Theology and Church Doctrines class, we begin each school year with a unit in conjunction with the college counseling department, in which the students participate in personality profiles, occupational interest assessments, etc. Interwoven with these activities, I am tasked with teaching lessons on the value of work, the importance of rest, and also a lesson on the development of virtue. Unfortunately, there are no clear objectives set forth by the counseling department on what any of these lessons should strive toward, or how they relate to one another. Last year, for the virtue section, I decided to guide my students through the cardinal virtues (as well as the seven vices), though I still wondered how to convey real-world relevance to these 16- and 17-year-olds. This post has helped immensely in connecting the virtues to at least one ever-present temptation in their lives.
Good piece. AI ain’t going away, but there are definitely good and bad ways to use it. I’ve found it very useful for understanding topics I don’t quite grasp. I watch a YouTube video and then realize I don’t understand it. I go to ChatGPT, explain my issue, and wind up with a sort of on-demand tutor I can talk to, ask to clarify, and have correct me when I explain what I’ve learned back. (I wouldn’t trust it with high-level things, but there’s stuff that everyone in the field understands that I don’t because I’m not in the field; this is where ChatGPT excels, in my experience.) But that’s not letting it write your paper on the topic for you. I still try to be wary and conscientious about how I use it, how much I use it, and what I let it ‘teach’ or explain (never what’s capital-T True, only what’s factual), but I find it useful.
It’s just never really about the tool. It’s always about how we use it, and we will use it for ill if we aren’t careful and intentional about it. Really encouraging that at least one student learned that lesson in your class. Virtue is hard (in no small part because it isn’t always clear what virtue means in any given situation!), and it’s great that there are still young people who long for and engage in the Good fight.
> As for justice, while it certainly may be fair to let all students make use of a widely available tool, it does not apportion to the student what they need to flourish intellectually or morally as God’s creature, and thus, is not just.
This is the crux of the argument. The student may see the goal as completing the class with a good grade. The instructor may have the goal of producing a student who has gained understanding of the subject. For the student to portray understanding where there is none lacks virtue.
For a teacher to use an LLM to create outlines and quizzes based on information fed to it does not seem to lack virtue. It aligns with the instructor's goal, and using every available tool to facilitate that goal seems to fall within virtue.
The real quandary lies somewhere in the middle. Is it virtuous for study groups to break up portions of the coursework and assign each member one part to create a study guide for the whole group? Is it any different if an LLM does that work? Is it any different if CliffsNotes does that work after you've read the text? Or is the only virtuous path to receive instruction from the teacher and the assigned materials?
It's interesting to me that the ethical arguments against the use of LLMs to generate text and imagery are so different. With the image engines, the ethical challenges are other-directed: hungry illustrators will lose their jobs because corporations are giving their graphic design tasks to robots. Using image generators doesn't hurt the artist in the same way that using text generators hurts the writer in the ways you've described.
I would say that the use of AI art harms artists in the same kinds of ways: it deprives them of developing skills and attention to line and shading and proportion, etc. Granted, it accentuates other abilities, but it has the same pull toward efficiency that an LLM does, with similar effects on the artist.
I’ve used AI for over 2 years now. The first lesson was to write a prompt that was true to my purpose. I’m far less intentional than I think I am.
This should be part of every evaluation you make! It’s not a sheep machine unless you’re a sheep! Teach critical self-reflection during the learning process!
If you talk to indoctrinators, you'll hear that AI is a dangerous indoctrinator. They'll all say a hammer is a murder weapon. In fact, it only drives nails. The nails you selected.
> And like at least two of you, I wonder whether Shakespeare’s genius be so broken down by machine learning that we get a new sonnet or tragedy from the Bard?
I think this needs a 'could' or 'can' in between 'genius' and 'be'.
Maybe should have used an LLM to proofread this. :)
Good word. It's not about the tool. No doubt there were people screaming when they moved from scrolls to printed text.