As teachers and administrators raise concerns about the potential for students to abuse artificial intelligence tools such as ChatGPT, we have concerns of our own. The current use of these tools by teachers at FHS in the grading process is inappropriate and can never be as fair as grading by a human. An algorithm cannot possibly determine the creative value of a piece of writing or effectively evaluate the effort a student has put in. We, the Editorial Board of The Falconer, urge the administration and all teachers to immediately cease the use of these tools in the grading process, for the sake of fairness and the integrity of the teaching profession.
When AI is used to grade, it fundamentally threatens the student’s right to privacy. A student’s unique work is their own intellectual property, and a teacher who arbitrarily tosses that work into an AI tool risks its future unauthorized use or reproduction. This is especially crucial for students who hope to carry the work they began in high school into a future profession. Suddenly, their own work could be at risk of being stolen or copied before they ever get a chance to finish it. Many AI tools, including ChatGPT, are notorious for their lackluster privacy practices, and it is simply irresponsible and lazy for a teacher to use such tools haphazardly during the grading process.
Additionally, we condemn in the strongest possible terms the obvious breach of trust that employing AI tools brings, especially when they are used without a student’s prior consent. Writing a paper can be an outlet for a student to express their creativity and explore their struggles, insecurities and personal life. To run such tools indiscriminately over student papers is a violation of the trust students have placed in their teacher.
Using AI to help teachers manage the grading load with which they are tasked seems like an efficient fix-all, but, in this case, the efficiency forsakes fairness. Some would argue that an algorithm-based approach to grading, such as that employed by ChatGPT, is the ultimate way to level the playing field and ensure students receive unbiased, consistent grades. However, we do not agree. How is an algorithm to measure the creative value of a student’s short story or poem? How is an algorithm to account for a student’s improvement over time or the effort put into their work? The process becomes even less reliable when students do not know what parameters their teachers are entering into the AI tool. Ultimately, grading with these tools encourages students to take a formulaic approach to their schoolwork, prioritizing conditions that please the AI rather than expanding their creative boundaries. A good grade from an AI algorithm does not equal high-quality work, and it often leaves students confused as to why they received the grade they did.
We also feel that the use of AI in grading, particularly for writing-based assignments, deeply disrespects the time and effort of the student and the integrity of the teaching profession. We take issue with reducing something as subjective as a student’s self-expression, in their thoughts, creativity and words, to the output of an algorithm. We expect our English teachers to want to read our work and respond to it, whether it is a vulnerable piece of personal writing or simply a literary analysis. Additionally, when teachers of any subject don’t examine our work themselves, they miss valuable insights into where we are struggling and how they can help us. We don’t expect perfection; we just expect our teachers to care enough to give us their own opinions, because, quite frankly, most of us truly care about what they have to say.
Some FHS teachers claim they only use these tools to create “feedback” for students, and repeatedly assert that this feedback ultimately plays no role in the grading process. However, if a teacher is honestly evaluating all submitted work, then an AI tool is a redundant and unnecessary step; it would only be useful to a teacher who is not spending an appropriate amount of time grading and evaluating a student’s work. There is no “feedback” a robot could provide that would be remotely beneficial to students.
Ultimately, the Editorial Board believes that AI tools must stay out of the grading process, regardless of how a teacher chooses to incorporate them. We, as students, couldn’t care less what a robot has to say about our work; we want to know the true and honest opinion of our teachers. If we can’t use it to write, then you can’t use it to grade.