“I went into the woods because I wished to live deliberately.” So goes Thoreau’s famous declaration from Walden. We all more or less know that book and what its central project symbolizes, but perhaps we have not thought much about it. Maybe we should. Thoreau captured a sentiment more important now than ever, as AI makes us live less deliberately than we ever have. We’re increasingly trading the identity-forming experiences that require difficulty, time, and judgment for a life algorithmically formulated without risk or friction.
We make this trade when we outsource intellectual, romantic, and personal endeavors to systems designed to choose for us. The trade is even more pernicious upon examination than it appears at face value, and it’s something I’ve been grappling with recently.
The first dangerous arena for this trade is writing. That seemingly tame little task of putting pen to paper is actually quite risk-laden, especially when it’s part of a sustained intellectual process. Writing is an external manifestation of thinking: when we write, we represent ourselves, our knowledge, and our emotions to an audience, a possibly daunting notion. Though that very intellectual risk is what makes the exercise so rewarding, the rise of generative AI presents a seductive counter-ideal: doing the hard part for us, so we barely have to think at all. So we barely have to think about ourselves at all.
Now, with AI at hand, a tenth grader who once had to research Napoleon’s defeat at Waterloo organically, then communicate what he had learned (a potentially challenging week of intellectual work, but a rewarding one), can type in “Write me a 500-word essay answering the question: What were the factors that contributed to Napoleon’s defeat?” and have said essay within two minutes. Copy, paste, sorted. On to Fortnite and doomscrolling.
Now, yet another development emerges. Companies like OpenAI are apparently planning to introduce paid ads into their generative responses. So, if the tenth grader doesn’t read over what the bot wrote for him and make some edits, then perhaps between the sentences “Napoleon mistakenly tried to overexpand his territory” and “Napoleon’s 1812 invasion of Russia was a disaster,” there could very well be a plug for DraftKings. His teacher, while grading, will laugh to keep from crying.
Another risk-laden activity that is close to necessary, especially for adolescents, is dating. Online dating has a longer history than AI; by now, it has become not only a helpful tool largely used by older individuals but the de facto way for young people to find someone.
Digital algorithms worming their way into ‘real life’ are mostly banal and harmless when they occasionally help us decide what to wear, buy, and consume. They can even be helpful, assisting us in finding clothing or television programs that match our tastes. I would go so far as to say that using AI for schoolwork isn’t a major transgression if the work is cursory and rote, like a math problem or a chemical formula. Of course, context matters, and one hardly wants the next great class of physicists and chemists to be reliant on AI; getting “reps” matters, and AI interrupts that workflow.
But the humanities are where it gets tricky, and emotionally complex aspects of ‘real life’ get even trickier. The normalization of algorithmically-produced writing is a bad thing, point-blank, and I don’t just say that as a writer but as a person with a vested interest in the marketplace of ideas. Dating, sex, and love? Forget it—AI mediating human relationships is nothing short of ontologically insidious.
It’s important to remember that these platforms are literally built to sell to us; they are not neutral, and they are built and operated with an agenda. The less we pay attention to this fact, the higher the risk that our intellectual and romantic lives quietly morph into markets controlled by an invisible hand beyond our power of understanding. Corporations getting involved in this technology isn’t an issue unto itself so much as a glaring neon sign highlighting what’s going wrong. The introduction of advertisements into these services, then, is really a mask-off moment, revealing the nefarious corporate influences that lurk behind our use of online platforms. There is something fundamentally wrong with advertisements mingling with real knowledge, research, and information. Streaming and chatbot services may be pedestrian sources of information about the world, but they are still widely used. It is not beyond the pale to think that more ‘serious’ research and academic online resources might be the next victims of corporate influence and a culture of instant gratification. Something like JSTOR hosts publisher-specific ads but no glossy banner ads (yet); the troubling part of the digital intelligentsia, though, is how a money-grubbing bad actor can morph and corrupt any archive with the click of a button. (This is why maintaining physical media, and especially libraries, is so important, and why we’d benefit from listening to the Luddites more.)
The influence of corporate capitalism has been a troubling affliction on dating apps since their inception. Humans shouldn’t view their romantic prospects by swiping through profiles on a glossy screen sandwiched between advertisements for Jet2Holiday. It is stimulation distilled to its quickest form, its apex, and one must be a hardline corporate libertarian to find this morally acceptable.
Outsourcing our lives to algorithms means treating ourselves as products, a treatment made to feel natural by the slow merging of our intellectual and romantic lives into the same stream as ads.
The stimulation one receives from producing a well-researched and well-written essay imbued with one’s own perspective and way of communicating now intertwines with the stimulus of consuming an advertisement for a potentially exciting product. When one swipes through dating apps, the stimulus of the exciting world of romance and communication becomes similarly conflated with consumerism. Knowledge becomes a product, and so do other people. With Generation Alpha using these services from a young and impressionable age, usually frequently and often obsessively, this productification will be their reality. It will shape their existence. The effect of deep commodification is that dates and other life events are formulated in advance by algorithmic mediation and therefore reduced to consumables. The matter isn’t simply being exposed to more ads. The matter is humans packaging our desire in gamified, clickable forms, subliminally influenced by the ads that have become an irreducible component of the platforms arranging our lives.
Younger generations risk missing out on the uncertainty and risk-taking that are so important in adolescence. Humiliation is a fact of life. It is sometimes the unwanted child of uncertainty, of putting oneself out there without guarantees. It also teaches us that indignity is survivable; what’s more, weathering humiliation strengthens our tolerance for more of the guarantee-free endeavors I’ll term authentic risk.
Sharing a heady piece of writing or going on a first date with someone we’re unsure about can be terrifying, and these ventures often fail, but it is precisely these failures and victories alike that shape a person and prepare them for productive adult life. When uncertain encounters transform into algorithmically safe transactions, the apps are at terrible fault, since they are specifically designed to minimize genuine surprise through careful curation. Hinge apparently uses a Nobel Prize-winning mathematical formula (the Gale-Shapley algorithm) to optimize matches with the prospects it deems most compatible with you. I believe in an alternative vision in which each adolescent gets the privilege of experiencing the gratification of personal success on their own non-mathematical accord.
Uncertainty is the keyword here, as it is the origin of AI’s preeminence. Some may argue that AI is born of laziness, but I reject this notion—we are no lazier than the generations that preceded us. We just cannot cope with the uncertainty of success, especially when systems are in place to significantly reduce it. An AI-produced essay may not be great, but if you prompt the bot strategically, it will almost certainly be fine. Dating apps like Hinge, for their part, use algorithms to recommend people they think you will like. Going by Hinge’s book may prevent you from experiencing a great love (the greatest loves are often with those we don’t expect), but your optimized match is a surer bet than expending epistemic and emotional energy to pursue someone you find organically enticing.
To navigate our intellectual and emotional lives, using algorithms is easier and more time-efficient than relying on organic human capability. Many understandably also think algorithms produce better results. ‘Better’ as a qualitative term is difficult to assess, but here it may be a misnomer for ‘safer’: a result that succeeds on technical metrics rather than on one’s unique judgment. A ‘tighter’ essay isn’t automatically a ‘better’ one. A ‘messy’ human output that doesn’t neatly fit the system is a substantially richer one, more in step with personal and general advancement than an app-produced result, which provides little incentive to push the envelope since it is already ‘good enough.’ These algorithms encourage complacency. Letting systems choose for us may be efficient, but it is also a suspension of deliberate effort.
It’s telling that these algorithmic platforms are so widely used for the aforementioned purposes, because it confirms that younger generations primarily care about the same things as older generations: academic, romantic, and social success. Instead of treading our own paths in these domains, however, we outsource.
We must realize the terrifying reach of our inclination to outsource. Using AI is not a self-contained venture: it’s an exercise in social and economic participation. Corporations are engineering and optimizing this technology, and other corporations, in turn, are paying them to run their ads and push their influence. We think we are simply messaging ChatGPT or the boy on the other side of our dating app, but we are also in conversation with the many corporations eager to use our data and influence us to buy their products.
A significant motif in the film The Truman Show is Truman’s “wife” (a paid actress) breaking from her interactions with him to address an invisible camera and try to sell a product. Truman cannot have a typical life or marriage without coexisting with someone who uses his real (to him, anyway) human experiences to shill for companies. I imagine the corporations paying for their ads on our algorithmic platforms are much the same. We now interrupt your quest to get a date or degree to recommend you buy a Netflix subscription. We end up, in a minor sense, like Truman, in that our ‘real lives’ start to melt into the corporate overreach and consumerism that meddle in the tools we employ to live more easily. (Even if, in this sense, we’re basically a citizenry of Trumans surveilled by technology instead of by each other.) A viewer may wonder whether Truman has any genuine control over his life. If we let algorithms decide our every move, do we?
I am sympathetic to the neurosis that such a question can inspire, especially as I’m addressing a readership of high-achieving students, many of whom aspire to corporate careers in which algorithmic decision-making is pervasive at the macro level. An investment manager uses formulas to understand where it may be profitable to allocate money. A politician uses cost-benefit analysis to decide whether passing a bill is a good idea. The whole job of a consultant is that of a walking algorithmic decision-maker (though perhaps more morally tolerable than AI because there is usually a level of human creativity that flanks the mechanical logic employed).
Now, what will happen when the already algorithmic nature of these positions becomes increasingly reliant on the digital algorithms described above? Young people suckled on the ease of having every decision mediated through their phone screens, from plugging a destination into Google Maps to using AI shopping platforms to find an optimized lip gloss or gym bag, will find that their careers fit nicely into this model. There is now a world of platforms that actually make decisions for us instead of simply informing us, and everyone has a personal butler, coach, and doctor at their fingertips on one device: omnipresent yet ultra-controlling, technology that acts simultaneously as master and servant. The more comfortable the familiarity becomes, the more glaring red flags slip under the radar. I have used ads as an example of the inconspicuous quirks that manifest in our work and lives when we become overreliant on AI. Soon, the teenager whose Napoleon essay unwittingly morphs into a gambling shill can become a lawyer whose briefs unwittingly morph into gambling shills, especially as corporations find shrewder ways to advertise. I fear that this productification of ourselves and our personal and epistemic lives can become so pervasive that nary a human venture will escape the stain of heedless product capitalism.
Writers often mention the death of originality as a major problem in the age of AI, but the ascendancy of consumerism is emerging as a concern of equal weight as AI invades every aspect of our lives and corporations realize they can use this pervasiveness to their benefit. They can leverage real psychological dependence on AI to foster psychological dependence on (or at least a vested interest in) their products. Wasn’t it hard enough to try to succeed academically and romantically while navigating a hyper-capitalist culture, without having the two deeply intertwined?
The availability of systems that think for us makes using our own faculties to navigate life sometimes feel unnecessary or even eccentric. So we become complicit in letting AI strip our living of deliberateness. It’s worth remembering that using one’s own faculties to tackle the complex parts of life requires a conscious commitment, an idea Thoreau took to its extreme by literally removing himself from the influences of modern life and moving to Walden Pond. If that’s what it took for him to live deliberately in a time when his neighbors didn’t live in the grasp of a computer in their coat pockets and weren’t sick on blue-light-produced endorphins, it will likely take a lot more than that to pull off the same experiment today.
We must resist AI on a personal level because it has infected us on a personal level; it has infected everything from daily chores to our academic, intellectual, and dating lives with its impetus to outsource. We can start by choosing small arenas to tackle slowly and deliberately, even if we can’t curb every shortcut at once. A fitting place to begin is the action that starts many significant human developments: putting pen to paper.
Working for yourself and turning in work for judgment and criticism is both difficult and euphoric. If we all tried creating original work more often, we would ease into greater comfort with risk-taking and uncertainty. Any success achieved through this work would be an empowering bonus, but it wouldn’t be the point on its own. Soon, a newly minted writer may feel empowered enough to ask out the guy who goes to the same coffee spot as him, and little by little, one can craft a life wholly apart from what AI thinks we should write or whom we should date, and shake free of the threat of nefarious consumerism as a result. We would learn to live deliberately.
This message of living deliberately is paramount for younger generations unsure of life beyond their screens. As AI dependence proliferates through a person’s most impressionable stages, I imagine it is like any other addiction: harder and harder to shake with every passing year. Gen Z is already in the thick of it, but a gentle return to personal decision-making and privacy in some aspects of life is better than nothing. Let’s, as Duke students, work on rediscovering the joy and fulfillment of rejecting a chatbot’s romantic advice in favor of a late-night talk with friends over wine, choosing a class based on our own judgment and feeling without having AI glance over DukeHub, and even simply playing around with the equipment at Wilson in lieu of a digitally optimized workout. Maybe we’ll end up with strange conversations, experiments in judgment, or even formative failures.
We can only hope.
Great intellect, great romance, great creativity, and great healing lie on the other side of the seductive chasm we call AI, the bottomless pit of living, you could say, un-deliberately. Crossing that chasm isn’t easy, especially when the temptation of AI is so ever-present, and it’s harder still for the risk-averse, the decision-fatigued, or the addictive personality. I will do my part to transition back to little ways of self-reliance, hopefully without having to go full Walden. But hey, if you see me around campus chopping firewood and hand-washing my clothes in Duke Reclamation Pond, you’ll know where my mind’s at.
by Cara Eaton