Hacker News

After re-reading the post once again, because I honestly thought I was missing something obvious that would make the whole thing make sense, I started to wonder if the author actually understands the scope of a computer language. When he says:

> LLMs are far more nondeterministic than previous higher level languages. They also can help you figure out things at the high level (descriptions) in a way that no previous layer could help you dealing with itself. […] What about quality and understandability? If instead of a big stack, we use a good substrate, the line count of the LLM output will be much less, and more understandable. If this is the case, we can vastly increase the quality and performance of the systems we build.

How does this even work? There is no universe I can imagine where a natural language can be universal, self descriptive, non ambiguous, and have a smaller footprint than any purpose specific language that came before it.




You're going to be pretty hard pressed to do Rust better than Rust.

There's minimal opportunity with lifetime annotations. I'm sure there are very small opportunities elsewhere, too.

The idea of replacing Rust with natural language seems insane. Maybe I'm being naive, but I can't see why or how it could possibly be useful.

Rust is simply Chinese unless you understand what it's doing. If you translate it to natural language, it's still gibberish, unless you understand what it does and why first. In which case, the syntax is nearly infinitely more expressive than natural language.

That's literally the point of the language, and it wasn't built by morons!
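To make the expressiveness point concrete, here's a small sketch of my own (not from the thread): a lifetime annotation packs into a few characters a guarantee that takes a full sentence of careful natural language to state, and which the compiler then enforces.

```rust
// The `'a` in the signature says: the returned reference borrows from
// `haystack` (never from `needle`), so it stays valid exactly as long
// as `haystack` does. Saying that precisely in prose is much wordier,
// and nothing would check that the prose stays true.
fn first_match<'a>(haystack: &'a str, needle: &str) -> Option<&'a str> {
    haystack.split_whitespace().find(|w| *w == needle)
}

fn main() {
    let text = String::from("the quick brown fox");
    let found = first_match(&text, "brown");
    assert_eq!(found, Some("brown"));
    println!("{:?}", found);
}
```

Translating that signature into English doesn't make it clearer to someone who doesn't already understand borrowing, which is the point being made above.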


I believe the author thinks of this problem in terms of “the LLM will figure it out”, i.e. it will be trained on enough code that compiles, that the LLM just needs to put the functional blocks together.

Which might work to a degree with languages like JavaScript.


That point makes no sense.

If the LLM is not perfect at scale - extraordinarily unlikely that it would be - then it becomes relevant to understand the actual language.

That's either natural language that's supposed to somehow be debuggable - or it's a language like Rust - which actually is.


@manuelabeledo: during 2025 I've been building a programming substrate called cell (think language + environment) that attempts to be both very compact and very expressive. Its goal is to massively reduce complexity to make general purpose code more understandable (I know this is laughably ambitious and I'm desperately limited in my capability to pull off something like that). But because of the LLM tsunami, I'm reconsidering the role of cell (or any other successful substrate): even if we achieve the goal, how will this interact with a world where people mostly write and validate code through natural language prompts? I never meant to say that natural language would itself be this substrate, or that the combination of LLMs and natural language could do that: I still see that there will be a programming language behind all of this. Apologies for the confusion.

To be generous and steelman the author, perhaps what they're saying is that at each layer of abstraction, there may be some new low-hanging fruit.

Whether this is doable through orchestration or through carefully guided HITL by various specialists in their fields - or maybe not at all! - I suspect will depend on which domain you're operating in.


>After re-reading the post once again, because I honestly thought I was missing something obvious that would make the whole thing make sense, I started to wonder if the author actually understands the scope of a computer language.

The problem is you restrict the scope of a computer language to the familiar mechanisms and artifacts (parsers, compilers, formalized syntax, etc), instead of taking it to be "something we instruct the computer with, so that it does what we want".

>How does this even work? There is no universe I can imagine where a natural language can be universal, self descriptive, non ambiguous, and have a smaller footprint than any purpose specific language that came before it.

Doesn't matter. Who said it needs to be "universal, self descriptive, non ambiguous, and have a smaller footprint than any purpose specific language that came before it"?

It's enough that it can be used to instruct computers more succinctly and at a higher level of abstraction, and that a program comes out at the end which is more or less (doesn't have to be exactly) what we wanted.


If you cannot even provide a clear definition of what you want it to be, then this is all science fiction.

Doesn't have to be "a clear definition"; a rough definition within some quite lax boundaries is fine.

You can just say to Claude, for example, "Make me an app that accepts daily weight measurements and plots them in a graph" and it will make one. Tell it to use that framework or this pattern, and it will do so too. Ask for more features as you go, in similarly vague language. At some point your project is done.

Even before AI, the vast majority of software was not written with any "clear definition" to begin with. There's some rough architecture and idea, and people code as they go, often having to clarify or rebuild things to get them the way they want, or discovering they want something slightly different, or that the initial design had some issues and needs changing.


This is the most handwaving per paragraph I've ever seen.

I think a fair summarization of your point is "LLM generated programs work well enough often enough to not need more constraints or validation than natural language", whatever that means.

If you take that as a true thing then sure why would you go deeper (eg, I never look at the compiled bytecode my high level languages produce for this exact reason - I'm extremely confident that translation is right to the point of not thinking about it anymore).

Most people who have built, maintained, and debugged software aren't ready to accept the premise that all of this is just handled well by LLMs at this point. Many many folks have lots of first hand experience watching it not be true, even when people are confidently claiming otherwise.

I think if you want to be convincing in this thread you need to go back one step and explain why the LLM code is "good enough" and how you determined that. Otherwise it's just two sides talking totally past each other.


>This is the most handwaving per paragraph I've ever seen.

Yes: "LLM generated programs work well enough often enough to not need more constraints or validation than natural language" is a fair summarization of my point.

Not sure of the purpose of the "whatever that means" that you added. It's clear what it means. Though casual language seems to be a problem for you. Do you only ever discuss things in formally verified proofs? If so, that's a you problem, not an us or LLM problem :)

>Most people who have built, maintained, and debugged software aren't ready to accept the premise that all of this is just handled well by LLMs at this point.

I don't know who those "most people" are. Most developers already hand those tasks to LLMs, and more will in the future, as it's a market/job pressure.

(I'm not saying it's good or good enough as a quality assessment. In fact, I don't particularly like it. But I am saying it's "good enough" as in, people will deem it good enough to be shipped).


> I don't know who those "most people are". Most developers already hand those tasks to LLMs, and more will in the future, as it's a market/job pressure.

This is definitely not true. Outside of the US, very few devs can afford to pay for the compute and/or services. And in a couple of years, I believe, devs in the US will be in for a rude awakening when the current prices skyrocket.


The "whatever that means" isn't a judgement jab at your point, merely acknowledging the hand waving of my own with "good enough".

I hope this comment thread helps with your cheeky jab that I might have a problem understanding or using casual language.

I'm not sure if it's moving the goalpost or not to back away from a strong claim that LLMs are at the "good enough" (whatever that means!) level now and instead fall back to "some devs will just ship it and therefore that's good enough, by definition".

Regardless, I think we agree that, if LLMs are "good enough" in this way then we can think a lot less about code and logic and instead focus on prompts and feature requests.

I just don't think we agree on what "good enough" is, whether current LLMs produce it with less effort than the alternatives, or whether most devs already believe the LLM-generated code is good enough for that.

I use LLMs for a lot of dev work but I haven't personally seen these things one- or even many- shot things to the level I'd feel comfortable being on call for.


>I just don't think we agree on what "good enough" is, if current LLMs produce it with less effort than alternatives, and if most devs already believe the LLM generated code is good enough for that.

We don't need to consider what they think; one can just look at their "revealed preferences", i.e. what they actually do. Which, for the most part, is adopting agents.

>I use LLMs for a lot of dev work but I haven't personally seen these things one- or even many- shot things to the level I'd feel comfortable being on call for.

That's true for many devs one might have on their team as well. Or even oneself. So we review, we add tests, and so on. We already do that when the programming language is a "real" programming language; it doesn't have to change when it's natural language to an agent. What I'm getting at is that this is not a show stopper for the point of TFA.


You do need a clear definition of what this “LLM as a high level language” is supposed to be. Otherwise it’s all just wishful thinking.

“It’s good enough” so it generates apps that could otherwise be boilerplate. OK, I guess? But that’s not what OP was talking about in their post.


In the same way that in Rust you can download a package with Cargo and use it without reimplementing it, an LLM can download and explore all written human knowledge to produce a solution.

Or how you can loop over all combinations of all inputs in a short computer program; it will just take a while!

If you have a programming language where finding an efficient algorithm is a compiler optimization, then your programs can get a lot shorter.
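A hedged sketch of that idea (my own example, not the commenter's): the function below is a "specification" that literally tries every subset, O(2^n), to decide whether any subset of the inputs hits a target sum. It's short precisely because it's naive; a sufficiently smart optimizer could substitute dynamic programming without changing its meaning.

```rust
// Brute-force subset sum: enumerate every bitmask over the inputs and
// check whether the selected elements sum to `target`. Short to write,
// exponential to run, which is the trade-off being described above.
fn subset_sum_bruteforce(xs: &[i64], target: i64) -> bool {
    (0..1u32 << xs.len()).any(|mask| {
        xs.iter()
            .enumerate()
            .filter(|(i, _)| mask & (1 << i) != 0)
            .map(|(_, x)| *x)
            .sum::<i64>()
            == target
    })
}

fn main() {
    assert!(subset_sum_bruteforce(&[3, 7, 12], 10)); // 3 + 7 = 10
    assert!(!subset_sum_bruteforce(&[3, 7, 12], 11)); // no subset sums to 11
    println!("ok");
}
```

The hypothetical in the comment is a toolchain where you write the five-line brute force and the compiler is trusted to find the efficient equivalent, much as today we trust it to vectorize a loop.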



