Corporate Exit
I tried to work in corporate tech again. It did not last long.
I’ve been in the tech industry a long time. I’ve adapted to every technology change and pendulum swing. That is easy. It’s the things that stay the same that are hard. The group politics, the cliques, the territoriality and protectionism, the micromanagement, the cargo culting and trend chasing… these just seem worse over time, and I reacted to them the same way that an immune system reacts once it’s been sufficiently exposed to a pathogen or allergen: for myself, my sanity, and my value, I quit.
It’s Slop All The Way Down
People have made a lot of noise blaming AI for ruining tech. This is like blaming the latest leaders for ruining the country. These things are just the ultimate expression of things that have always been there. Both are Frankfurt-style capital-B Bullshit generators. In the case of tech, in the service of Graeber-style Bullshit Jobs. LLM slop is created from a text corpus created by humans. The same humans who made careers on writing bland word-salad corporate strategies, copying and pasting from StackOverflow, and inventing or pledging to one priesthood after another (XP, ITSM, DevOps, NoOps, SCRUM, Agile, SRE, GitOps, etc). The same humans going “all-in” on adopting LLMs as “partners in thinking” on their “AI journeys.” The AI industry did not create regression to the mean, it just made it ruthlessly efficient. The industry has been full of bullshit for years; it’s just weapons-grade now… concentrated for easy use by your engineering org’s local petty tyrant. LLM waste has a demand-side component.
On Context
One of the things you get from working at a place for a while is context. The same thing we’ve contorted our whole society to feed to the LLMs in a mad dash to make them do our work for us. Context, not coincidentally, used to be one of the primary tools of artificial job security. You are all familiar with (or perhaps exemplified) the type of colleague who would hoard local knowledge to become the go-to person… always willing to fix the issue but never motivated to solve the problem. Well, now you are training your replacement whether you like it or not. Companies that used to protect their trade secrets (code, documents) with an almost feverish paranoia are now willingly giving away everything to just a few third parties. Why?
The anticipated payoff is reduced labor costs. Everyone is doing this because they think it’s inevitable, or because they think their job is safe if they are the slop arbiters, the Claude Skill “authors”, the person who has pivoted to doing the work “only humans can do.” On this view, only the people who were already bad at software engineering will be left out. Every human needs to be a senior or better; the LLM is the junior. It will soon be the senior too, as people get worse at actually doing the job. This leads to people scrambling to be the “ideas people” who will feed implementation requests to the LLMs and guide them architecturally (they are still very bad at software and system architecture AFAICT).
So the headcount available for real engineers is getting smaller and smaller, in the short term at least. It seems obvious that AI can’t replace humans entirely, not at the quality you’d hope that humans would provide. But shareholder value is the goal, not quality. Large tech companies have already shown us that they will degrade quality to whatever degree they can as long as the stock price is not directly affected on the timescale of financial quarters. They have gotten away with this by “embrace and extinguish” means. Consumers have little choice. This is what is now called enshittification but was a well-worn strategy long before the term was coined.
Smaller companies are also effectively training their larger acquirers. We’re now in a world where we are giving those same companies (or the same sorts of companies) deep, granular insights that they could previously only glean indirectly through user surveillance on their SaaS, cloud, and ad platforms. Now they have slides, business plans, financial estimates, code, ideas, meeting notes, emails… all of it tokenized and rendered into highly predictive and proprietary market intelligence. In my estimation, the businesses handing this data over in the hope of getting a cheap automated labor force are simply serving themselves up as corporate organ donors.
Giving up context like this is a huge gamble. So is the scale of investment and subsidy. But as usual, if it’s a losing bet, the penalties will be felt most by the folks who can’t afford the negative consequences and shocks from this: workers and the taxpayers. The owners and leaders will be fine. Even in the worst AI market catastrophe, Sam Altman and the Amodeis are going to be left with a giant pile of money. Tech workers will be set adrift and looking for new careers… at best.
How This Works on the Ground
All of this behavior is laundered through corporate mission statements and CEO speeches about leaning into new paradigms and whatnot. So how does it feel when you’re in a corporation? In my opinion, it already feels like you are the LLM. The people who have internalized this mission own the context. They will feed it to you in the form of an assignment or a task; it might have a traditional label like “spike” or “ticket” or “design” or whatever.
But then you do the work and submit it for review. Your operator’s face scrunches up. “This isn’t exactly what I had in mind.” You do your best to accommodate. “Well no, not here, just in this part.” You do it again. The operator, still frustrated that your work doesn’t look like the work he would do, points you to the coding standards doc or whatever, in the expectation that you will abide by this document as a form of preamble to all of your future efforts so that this back and forth can be avoided.
It’s a CLAUDE.md. People will literally give you a system prompt as if you were a fucking machine. When the layoff ax gets swung again, they are betting that the company will keep more of the ideas people who fed context to others than the people who merely coded to spec. After all, an LLM can do that. And that seems like a good bet in the short term. The gap between the context haves and have-nots has become even wider, as has the prestige gap between “thought leaders” and implementers.
Again, this did not start with AI; the AI industry just mechanized it and started selling it as a SaaS. The product was created to meet a need, a tendency, a behavior that was always there: the micromanager, the taskmaster, the petty tyrant who has collected all the Right Answers and sees people as fungible machines to implement their designs. The good ones are called “talented” or “skillful”; the rest acquire less pleasant names. Manager and control-freak types defined the interface a long time ago; OpenAI and Anthropic productionized it.
If you’ve been living this way for years already, it probably feels normal; you could hardly be blamed for not noticing it or realizing that things could actually be a lot better for you.
Perspective Beats Context
After some months outside of the industry, touching actual grass, and working on my own technology projects, I found this all jarringly intolerable upon returning and quit this AI-era job almost immediately. But there are people still there; good people, smart and creative people, living and working among the LLM-pilled taskmasters and tyrants. They’ve endured the depersonalization and the prompting and the constant threat of replacement and earned enough context to be evaluated as “skillful” and productive, and keep plodding on for the salary. But the value of context has a shelf life when your job consists almost entirely of shoveling it into the slop machine. In place of fleeting context, I wish a change in perspective for them. It’s one thing to be told things suck, it’s another to realize it. I read Microserfs when it came out; I still remained committed to a tech career. I had to experience the pain first-hand, then I had to experience life outside of tech for a bit. Only then did I really realize just how miserable it makes me.
I suppose I will have more trouble making ends meet now, but I think that by leaving modern corporate tech nonsense behind I have a better chance of maintaining my skills, motivation, and mental sharpness, and of rebuilding my sense of self-worth after years of being hemmed in by constricting and often pointless knowledge work. I think if enough people realize what is actually going on (and has been going on), things could change for the better. But I’m not sticking around to find out. I’m betting – maybe naively, maybe against all hope and evidence – that someday in the near future, the tech industry will realize it needs people who maintained their code and systems literacy.