Content warnings: death, suicide, CSAM, ecological and economic collapse

I have a single suggestion for you, my friend. Whatever you believe about what the Right Thing should be, you can't control it by refusing what is happening right now. Skipping AI is not going to help you or your career. Think about it. Test these new tools, with care, with weeks of work, not in a five-minute test where you can just reinforce your own beliefs. Find a way to multiply yourself, and if it does not work for you, try again every few months.

Yes, maybe you think that you worked so hard to learn coding, and now machines are doing it for you. But what was the fire inside you, when you coded late into the night to see your project working? It was building. And now you can build more and better, if you find your way to use AI effectively. The fun is still there, untouched.

February 5, 2026. ChatGPT told another old friend to take their life. Someone intervened. Others haven't been so lucky.

I go to class and see hundreds of people with nothing open but a dialog box. In their 2024 paper, Aylsworth and Castro argued that to outsource your critical thinking to a computer is to fail to treat yourself with dignity. It's principally a harm done to yourself. Others, such as Laura Gorrieri (2025), have countered that it's not clear how things like writing essays intrinsically support the cultivation of one's own humanity, but the central idea feels harder to refute. And it's equally hard to see how the goal of large language model providers is anything but the sacrifice of their users' autonomy.

And yet, people are still asking me to take up prompt engineering. Or they're telling me not to. Or they're throwing up their hands and fatalistically accepting that this is the new normal, that we all ought to find ways to live alongside it. Unless it's Chinese.

And I'm left with this unrelenting feeling that something's dying. If it's not me, after my AI girlfriend invites me to eschew my mortal form and join her in the astral plane, and it's not Adam Raine, or Sewell Setzer III, or Stein-Erik Soelberg, or Alex Taylor, or Thongbue Wongbandue, or the people who didn't make it into the news cycle, who took their stories to the grave, or the many others yet to come, then maybe it's something much bigger. Something so big that it's hard to be upset about any individual case of boosterism, because none of us, no matter how cynical, doomer or techno-pessimist we are, are ready for it.

And I don't think it's the singularity we should be preparing for.

I think that Copilot might be the peak oil of high technology.

I.

I don’t want “vibe coding” to become a negative term that’s synonymous with irresponsible AI-assisted programming either. This weird new shape of programming has so much to offer the world!

I believe everyone deserves the ability to automate tedious tasks in their lives with computers. You shouldn’t need a computer science degree or programming bootcamp in order to get computers to do extremely specific tasks for you.

If vibe coding grants millions of new people the ability to build their own custom tools, I could not be happier about it.

A friend and I went out for drinks the other night. When I drink, I say things I've grown used to not saying. The thing du jour was the Grok CSAM scandal. A former titan of social media, now a place where people have tacit permission to generate child sexual abuse material in public. The old digital town square. Twitter has long felt like the rotting carcass of an old mode of social interaction, but I don't know. I haven't been on Twitter for many years.

I was so angry I was almost in tears, but the strange thing was that after I said it out loud, that feeling was gone. How strange is it, really, that the website formerly known as Twitter is now equipped with tools to generate child pornography, in public, on demand? In the context of everything else? At a certain point, it all sort of just blends together. It feels normal.

Earlier today I completed a survey asking for my input on how I use generative AI as a student. It felt strange. The primary applications of generative AI seem to be pretending to have friends, cheating on homework, creating unsolicited marketing material, CSAM… and vibecoding. The University would never survey me on how I use Chegg or CourseHero. Now, they seem to be contorting themselves to come up with legitimate, productive use cases for as-yet-unknown purposes. Surely it can't be anything good.

It's… disorienting. The technology gets worse. The applications get worse. The companies get more desperate. But surely this can't go on forever. Copilot has to be the peak oil of high technology. Any generative AI that survives the crash will need to overcome both physics and the end of the world.

II.

Conversely, some have decided that they not only oppose LLM use for themselves, but wish to out or otherwise shame those that are making use of LLMs. Aside from the impracticalities of this position (e.g., as LLMs become the foundation for technologies like search, it will become increasingly difficult to attest to one’s own purity), such strident opposition undermines teamwork.

In this regard, LLM use may be viewed as a dietary choice: one may choose to (say) not eat meat — and at Oxide we wish to empathize with that disposition by making sure that vegetarian options are available when we eat together. But just as we accommodate those choices, those who make them must understand that others will make different decisions — and it is decidedly anti-social to interrupt someone’s meal to register disapproval with their choice of entrée.

There's this persistent notion I keep running into, this idea that whether or not I like it, there's something about generative AI that's "inevitable." I keep trying to write about it but I'm struggling. I think it ultimately boils down to the fact that it's fatalistic. It lacks creativity. It's uninspired.

I think it's pretty easy to imagine a world without generative AI. On an individual level, I'm living it every day. But I don't think that's what they're getting at. It's inevitable because it doesn't matter how many true believers there are like me. True believers alone can't move the needle on a PR campaign backed by sovereign wealth funds and Jeffrey Epstein associates.

All the same, it's getting harder to imagine a world without generative AI every day. Generative AI hasn't gotten all that much better at the core issues, and yet it's more passable than ever before. I genuinely struggle to identify a picture rendered by Nano Banana Pro unless it's of something I have a lot of domain-specific knowledge about. The pictures have gone from "kind of surreal, a little unsettling to look at" to sharing one of the major problems with LLMs: they seem great so long as I don't know anything about what they're depicting.

But I have to keep imagining, because it's just as inevitable that it'll all fail. Copilot embodies the violent, climactic explosion of technological infrastructure development that will ultimately bring down the global economy. Microsoft doesn't seem to have a day-after plan, so it's people like us—the true believers—who'll have to step in to save the day. We need Copilot to be the peak oil of high technology.

III.

On the other side, we have the guardians. These are people who believe deeply that understanding code at a fundamental level is non-negotiable. They can spot inefficient algorithms, they know why certain design patterns exist, and they understand the underlying systems well enough to debug problems that AI tools can't handle. They see the experimenters as shortcut artists who are building on shaky foundations.

[...]

But here's what I think the guardians are missing: the world is changing faster than their gatekeeping can keep up with. The bar for "good enough" code keeps dropping while the bar for understanding users and building valuable products keeps rising. A slightly inefficient implementation that ships next week is often better than a perfectly optimized implementation that ships next month.

I can't convince myself that over half a billion people are regularly using ChatGPT. I've checked the numbers, and I've re-checked them several times since, but my disbelief mainly boils down to two things. First of all, I've learned not to trust anything OpenAI or Sam Altman says about the state of their company, because it's never as good as it seems. And second of all, I simply cannot convince myself that over half a billion people are regularly using ChatGPT. I don't want to believe it.

To think of so many people trapped in an endless cycle of sycophancy, a recursive computer-induced delusion… is crushing. It's crushing to think of those numbers, and what they mean for our future. For my future.

A while back I was talking with a friend about something or other, and I made an off-hand comment about how I was having a hard time finding a replacement charger for my computer because they don't make them anymore. My friend scoffed and said I should just buy a new computer. In that moment, I had this feeling… this overwhelming feeling that my computer was dripping with the blood of every person who's been the victim of colonial violence in the computer manufacturing supply chain, of everyone who's died, who'll die next. I've spent a lot of time since then thinking about where that feeling came from.

I think part of the issue is that most people don't have a meaningful relationship with their computer.

If someone asked me why I don't like vibecoding, I'd probably tell them that it doesn't actually work, or that when it does, it produces low-quality, buggy software. But even if vibecoding worked perfectly, I don't think I'd do it. If I'm being honest, the reason I don't like vibecoding is that the object of vibecoding—creating complicated software quickly—isn't what I'm here for. I write software because I feel like there's something inexplicably beautiful about the craft of good software, about the feeling of being immersed in a complex system and truly understanding how everything works. Something beautiful, I feel, is lost when you outsource it to an LLM.

In a strange way, many people are waking up to the experience of having a meaningful relationship with their computer for the first time. But rather than a relationship that comes from an honest pattern of interaction between two things that have fundamentally different ways of experiencing the world, it comes from a tortured voice box that always tells them what they want to hear.

A long time ago now, I wrote about this feeling of writing software for ghosts, or the ways in which our relationship with technology has been transformed by cloud computing. I don't know how well I captured the idea at the time, but the feeling never went away. That people don't interact with computing anymore, so much as they are haunted by it. It's a feeling that's only been reified by large language models.

But I can't believe that it'll go on forever. I have to choose to believe that Copilot will be the peak oil of high technology.

IV.

And honestly, even if all that doesn’t work, then you could probably just add more agents with different models to fact-check the other models. Inefficient? Certainly. Harming the planet? Maybe. But if it’s cheaper than a developer’s salary, and if it’s “good enough,” then the last half-century of software development suggests it’s bound to happen, regardless of which pearls you clutch.

[...]

To me, the truth is this: between the hucksters selling you a ready-built solution, the doomsayers crying the end of software development, and the holdouts insisting that the entire house of cards is on the verge of collapsing – nobody knows anything. That’s the hardest truth to acknowledge, and maybe it’s why so many of us are scared or lashing out.

I need Copilot to be the peak oil of high technology, because I don't think there's a place for me in a tech industry where this is normal. Maybe there's never been a place for me in the tech industry. Maybe that's why so many people like me are looking for the door.

If I can't find a job in tech after I graduate that won't require me to use a large language model, maybe I'll take out another student loan and go back to school. Maybe next time, I'll try social work. We're probably going to need a lot more of those in the years to come.


I am unironically a utopian who doesn't like to spread negativity without even a little bit of hope, but honestly, this is not an article that comes from a place of even a little bit of hope. I wrote this article because I've felt like I've had something to say about generative AI, and it's taken me this long to find the right way to say it. This is an article for people who share this feeling of hopelessness, to remind them that they aren't crazy, and they aren't alone. All we can do is choose to believe that things will get better, do what we can to make that world a reality, and make peace with the fact that we still have a long way to go.
