LLMs made them twice as efficient: in a single release, they're burning both tokens and their reputation.
It's kinda mindblowing. What even is the purpose of this? It's not like this is some post on the vibecoding subreddit, this is fricken Cloudflare. Like... What the hell is going on in there?
To be honest, sometimes on my hobby projects I don't commit anything at the beginning (I know, not a great strategy) and then just dump everything into one large commit.
I've also been guilty of plugging away at something and squashing it all before publishing for the first time, because I look at the log and go, "no way I can release this, or untangle it into any sort of usefulness".
I think that's a reasonable heuristic, but I have projects where I primarily commit to an internal Gitea instance, and then sometimes commit to a public GitHub repo. I don't want people to see me stumbling around in my own code until I think it's somewhat clean.
That is totally fine... as long as you don't call it 'production grade'. I wouldn't call anything production grade that hasn't actually spent time (more than a week!) in actual production.
But if the initial commit contains the finished project then that suggests that either it was developed without version control, or that the history has deliberately been hidden.
It was (and is) quite common for corporate projects that become open source to be born inside an internal repository/monorepo. When the decision is made to open-source them, the initial public commit is just a dump of the files in a snapshotted, public-ready state, rather than a port of the internal history (which, even with tooling to rebase partial history, would be immensely harder to audit to ensure internal information wasn't improperly released).
So I wouldn't use the single-commit as a signal indicating AI-generated code. In this case, there are plenty of other signals that this was AI-generated code :)
I usually work in branches in a private repo, squash and merge features/fixes there, and only merge the clean, verified, extensively tested result back to the public one.
You don't need to see every single commit and the exact chronology of my work; snapshots are enough :)
Taking a best-faith approach here, I think it's indicative of a broader issue: code reviewers can easily get "tunnel vision," where the focus shifts to reviewing each line of code rather than cross-referencing against both the small details and the highly salient "gotchas" of the specification/story/RFC, and ensuring those details aren't missing from the code.
This applies whether the code is written by a human or an AI, and also whether the code is reviewed by a human or an AI.
Is a GitHub Copilot auto-reviewer going to click two levels deep into the Slack links provided as a motivating reference in the user story that led to the PR being reviewed? Or read the relevant RFCs? (And does it even have permission to do all of this?)
And would you even do this, as the code reviewer? Or will you just make sure the code makes sense, is maintainable, and doesn't break the architecture?
This all leads to the conclusion that software engineering isn't getting replaced by AI any time soon. Someone needs to be there to figure out what context is relevant when things go wrong, because they inevitably will.
This is especially true if the marketing team claims that humans were validating every step, but the actual humans did not exist or did no such thing.
If a marketer claims something, it is safe to assume the claim is at best 'technically true'. Only if an actual engineer backs the claim can it start to mean something.
> Every line was thoroughly reviewed and cross-referenced with relevant RFCs
The issue in the CVE comes from a direct contradiction of the RFC. The RFC says you MUST check redirect URIs (and, as anyone who's ever worked with OAuth knows, all the functionality around redirect URIs is a staple of how OAuth works in the first place -- this isn't some obscure edge case). They didn't make a mistake; they simply did not implement this part of the spec.
When they said every line was "thoroughly reviewed" and "cross-referenced", yes, they lied.
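For reference, the check the RFC mandates is small. Here's a minimal sketch in TypeScript (assuming a hypothetical client record carrying its registered URIs -- this is not Cloudflare's actual code):

    // RFC 6749 §3.1.2.3: if redirect URIs were registered, the server MUST
    // match the requested redirect_uri against them using simple string
    // comparison. `registeredRedirectUris` is a hypothetical field here.
    function isValidRedirectUri(
      client: { registeredRedirectUris: string[] },
      requestedUri: string,
    ): boolean {
      return client.registeredRedirectUris.includes(requestedUri);
    }

    // On mismatch, the authorization server MUST NOT redirect the
    // user-agent to the invalid URI; it should show an error instead.

Note it's an exact string match, not a prefix match; loose prefix matching is how plenty of historical redirect bypasses happened.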
I mean, you can't review or cross-reference something that isn't there... So, interpreting in good faith: technically, maybe they just forgot to also check for completeness? /s
> DevSecOps Engineer
> United States Army Special Operations Command · Full-time
> Jun 2022 - Jul 2025 · 3 yrs 2 mos
Honestly, it is a little scary to see someone with a serious DevSecOps background ship an AI project that looks this sloppy and unreviewed. It makes you question how much rigor and code quality made it into their earlier "mission critical" engineering work.
Maybe, but the group of people they are/were working with are Extremely Serious, and Not Goofs.
This person was in communications for the 160th Special Operations Aviation Regiment, the group that just flew helicopters into Venezuela. ... And it looks like they have a very unusual connection to Delta Force.
Considering how many times I've heard "don't let perfection be the enemy of good enough" when the code I have is not only incomplete but doesn't even do most of what was asked (yet), I'd wager quite a lot.
I don't know what's more embarrassing: the deed itself, not recognizing the bullshit produced, or the hasty attempt at a cover-up. Not a good look for Cloudflare. Does nobody read the content they put out? You can just pretend to have done something and they will publish it on their blog, yikes.
Professionalism at its finest!