These days, generative AI is all the rage: a buzzword in the business space, a cheat code to school and work, ground zero for a million think pieces about the future of humanity. It has gone from a niche tech curiosity to an everyday tool in record time. Platforms like OpenAI’s ChatGPT are being used for everything from drafting emails to writing code, making (often stolen) art, and even pretending to be your therapist.
Its rise has been meteoric, and so has the debate. Critics point to its massive environmental footprint, the exploitation of low-wage workers (especially in the Global South) who label and moderate its training data, and its potential to radically reshape the workforce, leaving many without decent livelihoods.
Amidst all the discourse, one thing is clear: whether we like it or not, generative AI is here to stay. The question isn’t if it will be part of our future, but how we will engage with it. What might a responsible approach to AI look like (and is this even possible?)? To engage with this question, we have to first understand its costs, limitations, and ethical implications.
AI and the environment
We tend to think about ethical consumption in terms of planes, cars, plastic straws, and disposable packaging, but technology is becoming a bigger share of the average person’s carbon footprint each year. The environmental impact of AI is significant: it’s estimated that a single ChatGPT query uses about ten times the energy of a traditional Google search.
The massive data centers that power ChatGPT and other popular models demand a staggering amount of electricity, most of it generated by burning fossil fuels. The surge in AI-driven demand is fueling a rapid expansion of these data centers, outpacing the growth of clean energy and driving a wave of new gas- (and even coal-) powered plants. The centers also require substantial volumes of water to cool the hardware used, putting strain on local water supplies and disrupting ecosystems.
The costs are not distributed evenly. Communities that are already marginalized often bear the brunt of this environmental damage, a classic story of environmental racism. In a historically Black neighborhood of Memphis, Tennessee, for example, Elon Musk operates a large supercomputer facility (which powers the chatbot Grok) that runs on methane gas turbines, a known source of air pollutants that contribute to asthma and other respiratory illnesses.
AI and labor exploitation
Generative AI is not magic; it requires massive amounts of exploited human labor behind the scenes. In 2023, reporting revealed that OpenAI outsourced content moderation to the San Francisco–based firm Sama, which employed contract workers in Kenya. These workers were paid less than $2 an hour to label data and filter toxic, graphic, and racist content used to train models like ChatGPT. Psychologically damaging work, performed for poverty wages.
AI systems also draw on the uncompensated labor of millions through large-scale data scraping. These models are trained on vast amounts of text, images, and other creative works, almost always without the creators’ consent. This practice raises unresolved questions about intellectual property rights: in most other contexts, using someone’s work without permission would be a clear violation, yet AI companies have largely avoided accountability. In early 2023, several artists filed a class-action lawsuit alleging that every image a generative tool produces “is an infringing, derivative work.” Their core complaint was simple: they did not consent, they were not compensated, and their influence was not credited.
AI and the future of work
As companies rapidly adopt AI tools, concerns are growing about their potential to radically reshape an already precarious labor market. Workers in some fields, such as voice actors, illustrators, and video game developers, are already being replaced.
Some experts counter that fears of mass job loss are overstated, predicting that AI will transform and redistribute work rather than eliminate it entirely, as has happened with previous technological shifts. Still, the short- and medium-term effects are damaging. AI tools are already being deployed to monitor and surveil workers, enforce productivity quotas, and undermine union organizing efforts.
In this sense, AI is not introducing entirely new dynamics but accelerating and amplifying longstanding patterns under capitalism—maximizing efficiency, cutting labor costs, and consolidating control at the expense of workers’ security and autonomy.
AI and Radish Magazine
Despite these enormous costs, AI is here, and it’s expanding fast. Pretending otherwise won’t protect us. The question is: how might we approach it in ways that align with our values, always keeping these harms in mind? There may be “no ethical consumption under capitalism,” but that doesn’t mean we can’t try our best.
The goal isn’t to be purists, creating moral litmus tests and writing off anyone who doesn’t pass (spoiler: none of us would, and this is part of a broader conversation that needs to be had about disposability culture in movement spaces). We can and should be kind to ourselves and others.
But we also need to recognize that short-term convenience can have long-term consequences. Beyond the enormous societal costs already discussed, AI carries steep personal costs, fuelling anti-intellectualism and deepening alienation. What happens when we lose our ability to think and create independently? What happens when AI becomes our therapist, our lover, our friend? Sounds dystopian AF to me.
Let’s be clear that the real blame here lies with the massive tech giants fuelling the AI craze for their own profit. Still, the ways we choose to engage—or not—have ripple effects, especially when multiplied across our communities.
At Radish, we strongly discourage the use of AI for creative or personal writing (i.e. all of the pieces we publish!). The ideas and words on the page should come first and foremost from you. AI flattens your voice, and we can usually tell: AI-generated writing starts to sound the same, with the same cadence and tone. We also strongly discourage the use of AI for research purposes; these models are known to “hallucinate” sources and provide information that is incorrect or misleading.

All that being said, we recognize there may be cases where AI can function as a tool for the more technical elements of writing: proofreading for grammar, suggesting ways to tighten a long sentence, or helping you phrase an idea you’re struggling to articulate. We generally do not accept any form of AI-generated art, but may make exceptions for artistic works that use the technology in creative or subversive ways. If you’re ever unsure about AI use in a Radish submission, reach out; we’re always happy to talk it through.
Beyond Radish, what follows is a practical guide-slash-quiz to help you weigh some of the considerations of your own AI usage. It’s not comprehensive, and, ultimately, it’s up to your judgment to use or not use AI tools in the ways that make sense for you. But the next time you do, take a moment to consider the true cost of convenience.
Some overarching considerations that may provide helpful space for reflection:
- What are the unique contributions I want to make to the world? What kind of legacy do I hope to leave?
- How can I keep growing my own creative and intellectual capacities, and also contribute to building the collective capacity of my community?
- How can I embrace the messiness and beauty of being human, rather than reaching for quick, sanitized fixes? What stories has capitalism taught me about time, scarcity, and the need to optimize?