ChatGPT Can Get Off My Lawn

Will artificial intelligence become the greatest boon to higher education since online learning? (This assumes that online learning was a boon, which is a topic for another day.) Or will it mean the utter destruction of academia as we know it? Those are the two views I see expressed most often these days, with various individuals I respect taking opposite sides.

As someone who is naturally skeptical of this kind of over-the-top rhetoric, I believe the answer lies somewhere in the middle. Despite the forceful yet mixed messages surrounding AI and its applications to higher ed, it has so far had very little effect on my work. Although I could be wrong, I don’t expect it to affect me much in the future.

So: Should I change the way I do everything to accommodate this latest “latest thing”? Or should I run for the hills and pray for the mountains to fall on me? Perhaps I should do neither, confident that the more attention a new toy receives, the less it probably deserves.

The suddenness with which AI arrived on campus last winter, in the form of ChatGPT, and the speed with which it became, overnight, all anyone was talking about, are reminiscent of other much-ballyhooed events of the not-too-distant past. Remember Y2K? Our computers would all stop working. Airplanes would fall from the sky. Civilization would be thrust back into the Stone Age. Yet, as I strongly suspected would be the case, none of that happened. It turned out to be a big “nothingburger,” as they say.

Or how about the introduction of the Segway scooter back in the early 2000s? Does anybody else remember the hype surrounding that? It was supposed to “fundamentally change” the way we all live. Spoiler alert: It didn’t.

More recently, I could point (with some trepidation) to the Covid panic of spring 2020, when we were treated to scenes of Chinese people dropping dead in the streets, shots of freezer trucks outside New York hospitals, and running death counts on the nightly news. The implication was clear: This respiratory disease was on par with Ebola or the Bubonic Plague. Yet none of that, or at least very little of it, was real.

It is now apparent that, if we subtract from the highly publicized totals those who died with the virus as opposed to from the virus—as well as those whose deaths were actually caused by the treatments they received (or failed to receive) and those who died due to other “mitigation” measures such as lockdowns—the Covid “pandemic” amounted to little more than a couple of bad flu seasons, if that.

In other words, the pandemic, too, was mostly hype. It was never as bad as the government and public-health officials told us it was. But we bought into it, anyway. This has become a primary feature of modern society, the so-called “information age,” in which relatively minor events are regularly blown out of all proportion by the potent combination of “expert” opinion and media, especially social media.

The current obsession with all things AI seems to me to be just the latest iteration of this trend. I don’t think it will turn out to be a complete bust, like the Segway, but I do think it will soon become endemic, just part of the landscape, like Covid and flu. I may be wrong; time will tell. Perhaps a year or two from now I will be embracing AI enthusiastically and penning a giant mea culpa. But I doubt it.

Meanwhile, how should those of us who teach in non-computer-related fields respond to the existence of AI and all the hype surrounding it? As someone who teaches primarily college writing, I have colleagues who are enthusiastically embracing AI, changing all their assignments, and encouraging students to “work with it.” Although I like and respect many of those individuals, I take issue with their approach. As teachers of the humanities, in particular, we have a different job.

I was taught that the “humanities” encompass all that makes us uniquely human: art, literature, philosophy, and religion. The purpose of offering humanities courses is to help students more fully embrace their humanity—to think for themselves, expand their minds, explore and come to terms with their deepest hopes, dreams, and fears. Artificial intelligence, it seems to me, is the antithesis of all that, as even the very name suggests.

What, after all, is the reason for allowing students to use AI in the humanities classroom, much less encouraging them to do so and teaching them how? Because they will probably be using it at some point in their professional lives and maybe even in other courses? Fine. Let them learn how to use it elsewhere (if indeed they really need to be taught). Because it “makes things easier for them”? What exactly are we making easier? Thinking? Why in the world would we want to do that?

Every humanities teacher knows that thinking well is hard work, that it does not come naturally to most people, that they therefore must discipline themselves to do it consistently, and that becoming a clear thinker is nevertheless a worthwhile pursuit because it brings great personal and professional rewards. For the life of me, I don’t understand why we would want students to do something that requires them to think less or suggests that turning their thinking over to a machine is a good idea.

And what about writing? One of the things I keep hearing from AI enthusiasts is that we can still teach thinking but allow students to use AI to help them express their thoughts. No, I’m sorry, it doesn’t work that way. Every writer understands, or ought to understand, that, in a very real sense, writing is thinking. They are not two separate activities. They are inextricably linked.

Indeed, one of the main ways we teach students to think is by teaching them to write—in their own words, in their own voice, engaging their own brains. Personally, I see no need to teach my students how to write like robots. They get enough of that in their high-school AP classes. Teaching them to write like real human beings—that is the challenge.

I alluded above to the fact that the swift and sudden advent of ChatGPT on college campuses was met with numerous pronouncements from on high. One of those, for me, came in the form of an email from my department chair, no doubt instigated by the dean and probably by the provost, informing us we were to include a “Statement on AI” in our syllabi. To their credit, those administrators didn’t tell us what the statement had to say or how we should approach the topic, just that we needed to let students know what we planned to do.

Fair enough. After giving the matter some thought, I wrote the following, which is now part of the syllabus for all my writing courses:

The main purpose of this course is to help you learn to express yourself, clearly and cogently, in your own unique voice: your thoughts and ideas, your emotions (where appropriate), your words. There is great value in that kind of authenticity, both personally and professionally. AI may be a useful tool for many things, but it cannot help you sound like the best version of yourself. It is also bad at following directions and tends to make things up, both of which can be grade-killers. For all these reasons, you MAY NOT use AI on any of your assignments in this course.

I try my best to structure the writing assignments so you can’t simply turn them over to ChatGPT. But of course I don’t always succeed, and clever students can often find a workaround. (Why they don’t just apply that cleverness to the assignments, I’ll never understand.) If I can prove that you used AI—and there are programs to help with that—you will receive a zero on that assignment. If I can’t prove it, but the writing sounds robotic—whether or not you actually used AI—you will almost certainly receive a lower grade than if you were writing in your own voice. (I’ve been reading essays that sound like they were written by robots since long before AI came along. I refer to that as “AP Syndrome.”) A big part of what I’m trying to teach you is how to write in such a way that you sound like an actual, intelligent, unique human being, with personality, experiences, passions, and opinions, and not like some soulless computer program.

Can I actually prevent students from using ChatGPT or any other form of AI? Probably not. But through a carefully curated combination of teaching, encouraging, cajoling, a little bit of bluffing, and continually fine-tuning my assignments, I can at least make it more difficult for them to simply outsource their writing or thinking to the hive brain.

If that makes me old-fashioned, outmoded, shortsighted, hidebound, intransigent, uncool, or a stereotypical “Boomer,” so be it. I will always believe that my job is to help students learn to cultivate their own intelligence, not rely on the artificial kind.

So, hey, ChatGPT? Get off my lawn.

This article appeared first on Brownstone Institute under a Creative Commons License (CC BY 4.0).

3 Comments

  • faith I kuzma
    February 27, 2024, 3:45 pm

    "Every writer understands, or ought to understand, that, in a very real sense, writing is thinking. They are not two separate activities. They are inextricably linked." AI writing is bland and propagandistic, less interesting even than the "automatic writing" / free writing trend in teaching writing during the 80s….

  • Daniel Dal Monte
    February 27, 2024, 4:27 pm

    My feeling was that certain elites actually wanted to generate hype about chatgpt. It was not an organic hype. For some reason there was actually a desire to advertise and make people aware of A.I., by portraying it as some terrible crisis.

  • Jay Brown
    March 19, 2024, 9:41 pm

    As a fellow teacher you have captured my thoughts exactly. I work with many teachers who sing the praises of AI and use it to do their "thinking". It is pretty clear to me where this is heading for teachers.

