ChatGPT, Strollers, and the Anxiety of Automation
Last fall, I published a book about strollers and what they reveal about our attitudes toward children and their caretakers. Although I pitched Stroller as, in part, a critique of the consumer culture of contemporary American parenthood, I came to love my (many) strollers. In the years when I routinely ran while pushing my kids ahead of me in our jogging stroller, I recorded race times faster than I had as the captain of my college track team. In the long, claustrophobic early days of the pandemic, my son and I meandered slowly up and down the sidewalks of our neighborhood watching that late, cold spring come to New England. Often, at the end of a long stroller walk or run, my kids fell asleep, and on warm days, I’d park them in the shade and myself in the sun to work while they slept, feeling a proud mix of self-sufficiency and frugality (no childcare needed to run or meet a deadline).
In the months after my book came out, friends and family sent me pictures of themselves pushing strollers in iconic places (the Brooklyn Bridge, a protest in front of the Supreme Court, Buckingham Palace) as though to say: Here I am living an adventuresome life with my children right alongside me. In my inbox I had photos of a fleet of UPPAbaby Vista strollers outside the 92nd Street Y, a suburban garage filled not with cars but with strollers, movie clips of runaway prams, and, more than once, stories about self-driving strollers. One video clip from my husband’s cousin showed a woman jogging, swinging her unencumbered arms next to a stroller while it matched her pace. To that one, I responded with a quick line about how much faster it would be to run without having to push the 100-plus pounds of my Double BOB.
That kind of casualness was a relic of a time before my inbox started to fill up with another flurry of emails, this time about ChatGPT. I taught high school English for many years and now teach freshman composition, so news about the new—horrifying, amazing, fascinating, or dystopian, depending on how one sees it—large language models, and their role at the nexus of writing and teaching, often made friends and family think of me. Because everyone has a wealth of (often fraught) memories of their own high school years, and because many of my friends now have children around the age of the students my husband and I teach, we end up talking about work in social contexts fairly often. Just how stressed out are the high school students enrolled in multiple AP classes? Are our students’ weekends like an episode of Euphoria or even—and this would be alarming enough—more like what our own adolescent parties were in the late ’90s? What do we wish our students were better equipped to do? How do we keep them off their phones in class? And, most recently, as news about ChatGPT swept through increasingly wide rings of society, I began to get questions not so different from those that accompanied the emails about self-driving strollers: What are we going to do about life as we know it being changed by automation?
It was from my husband that I first heard of ChatGPT. He teaches high school physics and computer programming, and so its implications for the classroom were on his radar long before my colleagues and I in the English department had even heard of it. “Soon,” he told me, “everyone is going to be talking about this.” He was right, of course, but that first night over dinner, it was easier to dismiss his predictions as alarmist, or as the niche concerns of computer programming teachers.
My initial response was to insist that there are important differences in how easily AI might produce work mimicking student code as opposed to essays. But what I couldn’t dismiss was a concern much broader than the assignments either of us might give or the implications for our specific students: the ethical and philosophical implications of the program itself. Instead of being built around if-then commands, Nick explained, ChatGPT is a neural network. What is it, then, Nick asked me, that makes the neural networks that make up ChatGPT different from our biological network of neurons? The fact that they’re silicon- rather than carbon-based? Why would a carbon-based network allow consciousness to develop and a silicon-based network not? How, he asked, could eight extra protons make all the difference? Nick’s line of thinking was almost intolerable to me. Of course, I insisted, there is something beyond carbon—perhaps something we can’t put into words or even prove exists—that makes us human. And though I pointed to emotions and connections and relationships, I could not articulate quite what that human-making something is.
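Nick’s distinction was easier for me to grasp once I saw it sketched in code. What follows is my own toy illustration, not anything resembling ChatGPT’s actual architecture (which is a transformer trained on vast amounts of text): a hand-written if-then rule beside a single artificial neuron that learns the same behavior from examples, without anyone ever typing the rule in.

```python
# A toy contrast between if-then commands and a neural network.
# Illustrative only: real large language models are transformers
# with billions of learned weights, not a single neuron.

import math
import random

# The if-then approach: a human writes the rule explicitly.
def rule_based_and(x1, x2):
    if x1 == 1 and x2 == 1:
        return 1
    return 0

# The neural approach: one artificial neuron learns the same rule
# from examples by repeatedly nudging its weights to reduce error.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
w1, w2, b = random.random(), random.random(), random.random()
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for _ in range(10000):
    (x1, x2), target = random.choice(examples)
    out = sigmoid(w1 * x1 + w2 * x2 + b)     # forward pass
    grad = (out - target) * out * (1 - out)  # error signal
    w1 -= 0.5 * grad * x1                    # adjust weights, not rules
    w2 -= 0.5 * grad * x2
    b -= 0.5 * grad

for (x1, x2), _ in examples:
    learned = round(sigmoid(w1 * x1 + w2 * x2 + b))
    print((x1, x2), "rule:", rule_based_and(x1, x2), "learned:", learned)
```

The unnerving part, and I think Nick’s point, lives in the training loop: the programmer never states the rule at all, only a procedure for adjusting weights, and the behavior emerges on its own.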
Unlike strollers, which I will happily discuss all day, I hate talking about ChatGPT, and yet I find myself doing so all the time, and often because I am the person who’s brought it up.
At the beginning of the spring semester, I posed a metaphor for my students to consider: Wasn’t using ChatGPT to complete a writing assignment (without acknowledging having done so) like going to the gym, setting the treadmill at 10 mph, letting it run for 30 minutes, taking a photograph of its display, and then claiming to have run 5 miles at a six-minute pace? It might appear to have happened, and the student, in a very passive way, would have been responsible for bringing the illusion to life, but he or she would be no fitter or faster than before, and would have gained less than the student who’d actually run even one or two minutes at a six-minute pace, or all 5 miles at a comfortable jog.