
The developer just "cleaned up the code comments", i.e. they removed all TODOs from the code: https://github.com/nkuntz1934/matrix-workers/commit/2d3969dd...

Professionalism at its finest!





LLMs made them twice as efficient: with just one release, they're burning tokens and their reputation.

It's kinda mindblowing. What even is the purpose of this? It's not like this is some post on the vibecoding subreddit, this is fricken Cloudflare. Like... What the hell is going on in there?


I also use this as a simple heuristic:

https://github.com/nkuntz1934/matrix-workers/commits/main/

There are only two commits. I've never seen a "real" project that looks like this.


To be honest, sometimes on my hobby projects I don't commit anything at the beginning (I know, not a great strategy) and then just dump everything in one large commit.

I’ve also been guilty of plugging away at something and then squashing it all before publishing for the first time, because I look at the log and go “no way I can release this, or untangle it into any sort of usefulness”.

I think that's a reasonable heuristic, but I have projects where I primarily commit to an internal Gitea instance, and then sometimes commit to a public GitHub repo. I don't want people to see me stumbling around in my own code until I think it's somewhat clean.

I have a similar process. Internal repo where work gets done. External repo that only gets each release.

The repository is less than one week old though; having only the initial commit wouldn't shock me right away.

That is totally fine... as long as you don't call it 'production grade'. I wouldn't call anything production grade that hasn't actually spent time (more than a week!) in actual production.

But if the initial commit contains the finished project then that suggests that either it was developed without version control, or that the history has deliberately been hidden.

It was/is quite common for corporate projects that become open-source to be born as part of an internal repository/monorepo; when the decision is made to open-source them, the initial public commit is just a dump of the files in a snapshotted, public-ready state, rather than the internal-repo history (which, even with tooling to rebase partial history, would make it immensely harder to audit that internal information wasn't improperly released).

So I wouldn't use the single-commit as a signal indicating AI-generated code. In this case, there are plenty of other signals that this was AI-generated code :)


I usually work in branches in a private repo, squash and merge features/fixes there, and only merge the clean, verified, extensively tested result back to the public repo.

You don't need to see every single commit and the exact chronology of my work; snapshots are enough :)


I might just make dummy commits ("asdadasdassadas") in the prototyping phase and then squash everything into an "Initial commit" afterwards.

Oh wow I'm at a loss for words.

To the author: see my comment at https://news.ycombinator.com/item?id=46782174. Please also clean up the misaligned ASCII diagram at the top of the README; it's a dead tell.


Yeah, deleting the TODOs like that is honestly a worse look.

Incoming force push to rewrite the history. Git doesn't lie!

I wouldn't put it past them...

I wouldn't put it in past tense...

Reminds me of Cloudflare's OAuth library for Workers.

>Claude's output was thoroughly reviewed by Cloudflare engineers with careful attention paid to security

>To emphasize, this is not "vibe coded".

>Every line was thoroughly reviewed and cross-referenced with relevant RFCs, by security experts with previous experience with those RFCs.

...Some time later...

https://github.com/advisories/GHSA-4pc9-x2fx-p7vj


What is the learning here? There were humans involved in every step.

Things built with security in mind are not invulnerable, human written or otherwise.


Taking a best-faith approach here, I think it's indicative of a broader issue, which is that code reviewers can easily get "tunnel vision" where the focus shifts to reviewing each line of code, rather than necessarily cross-referencing against both small details and highly-salient "gotchas" of the specification/story/RFC, and ensuring that those details are not missing from the code.

This applies whether the code is written by a human or an AI, and also whether the code is reviewed by a human or an AI.

Is a Github Copilot auto-reviewer going to click two levels deep into the Slack links that are provided as a motivating reference in the user story that led to the PR that's being reviewed? Or read relevant RFCs? (And does it even have permission to do all this?)

And would you even do this, as the code reviewer? Or will you just make sure the code makes sense, is maintainable, and doesn't break the architecture?

This all leads to a conclusion that software engineering isn't getting replaced by AI any time soon. Someone needs to be there to figure out what context is relevant when things go wrong, because they inevitably will.


This is especially true if the marketing team claims that humans were validating every step, but the actual humans did not exist or did no such thing.

If a marketer claims something, it is safe to assume the claim is at best 'technically true'. Only if an actual engineer backs the claim can it start to mean something.


The problem with "AI" is that, by the very way it was trained, it produces plausible-looking code. The "reviewing" process then becomes a search for needles in a haystack, with no understanding or mental model of how the code works, because there isn't one. It's a recipe for disaster for anything other than trivial projects.


The learning is "they lied". After all, apart from marketing materials making a claim, where is the evidence?

Wait, we think they’re lying because an advisory was eventually found? We think that should be impossible with people involved?

Reading the necessary RFC is table stakes. Instead we got this:

>"NOOOOOOOO!!!! You can't just use an LLM to write an auth library!"

>"haha gpus go brrr"

(Those lines remain in the readme, even now: https://github.com/cloudflare/workers-oauth-provider?tab=rea...)


To me it's likely, given the extremely rudimentary nature of that issue.

If you're asking in good faith,

> Every line was thoroughly reviewed and cross-referenced with relevant RFCs

The issue in the CVE comes from a direct contradiction of the RFC. The RFC says you MUST check redirect URIs (and, as anyone who's ever worked with OAuth knows, the functionality around redirect URIs is a staple of how OAuth works in the first place -- this isn't some obscure edge case). They didn't make a mistake; they simply did not implement this part of the spec.
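
For reference, here is a rough sketch (in TypeScript, since the library targets Workers) of the exact-match check the RFC requires. This is not the library's actual code, and the client-record shape is made up for illustration:

    // RFC 6749 §3.1.2.3: when redirect URIs are registered, the server MUST
    // compare the redirect_uri from the request against the registered values.
    interface RegisteredClient {
      clientId: string;
      redirectUris: string[]; // hypothetical field: URIs registered for this client
    }

    function assertValidRedirectUri(client: RegisteredClient, requested: string): string {
      // Exact string match only; prefix matching or skipping the check lets an
      // attacker receive authorization codes at a URI they control.
      if (!client.redirectUris.includes(requested)) {
        throw new Error("invalid_request: redirect_uri is not registered for this client");
      }
      return requested;
    }
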

When they said every line was "thoroughly reviewed" and "cross referenced", yes, they lied.


I mean, you can't review or cross-reference something that isn't there... So, interpreting in good faith, technically maybe they just forgot to also check for completeness? /s


https://www.linkedin.com/in/nick-kuntz-61551869/

DevSecOps Engineer United States Army Special Operations Command · Full-time

Jun 2022 - Jul 2025 · 3 yrs 2 mos

Honestly, it is a little scary to see someone with a serious DevSecOps background ship an AI project that looks this sloppy and unreviewed. It makes you question how much rigor and code quality made it into their earlier "mission critical" engineering work.


Tbf, there is no one with a ‘serious DevSecOps background’. It’s an incredibly strong hint that the person is largely a goof.

Maybe, but the group of people they are/were working with are Extremely Serious, and Not Goofs.

This person was in communications for the 160th Special Operations Aviation Regiment, the group that just flew helicopters into Venezuela. ... And there appears to be a very unusual connection to Delta Force.


Considering how many times I've heard "don't let perfection be the enemy of good enough" when the code I have is not only incomplete but doesn't even do most of the things asked (yet), I'd wager quite a lot

I don't know what's more embarrassing: the deed itself, not recognizing the bullshit produced, or the hasty attempt at a cover-up. Not a good look for Cloudflare. Does nobody read the content they put out? You can just pretend to have done something and they will release it on their blog, yikes.

Covering it up for sure. We all make mistakes. We all make idiots out of ourselves. But you have to take ownership and own up to move on.

Covering it up changes it from being dumb to being deceptive


Wow, this is definitely not a software engineer. Hmm, I wonder if Git stores history...

They actually rewrote the history later, but GitHub shows force-push history too: https://github.com/nkuntz1934/matrix-workers/activity?activi...

No more vulnerabilities then I guess!

They should have at least rebased it and removed it from the git history.

Hilarious. Judging by the username, it's the same person who wrote the slop blog post, too.


