eikenberry's comments | Hacker News

I think many people are missing the overall meaning of these sorts of posts: they are describing a new type of programmer who will only use agents and never read the underlying code. These vibe/agent coders will use natural(-ish) language to communicate with the agents and won't look at the code any more than, say, a PHP developer would look at the underlying assembly. The code simply isn't the level of abstraction they are working at. There are many use cases where this type of coding will work fine, and it will let many people who previously couldn't really take advantage of computers do so. That is great, but it will do nothing to replace the need for code that humans are required to understand (which, in turn, requires participation in the writing).

Your analogy to PHP developers not reading assembly got me thinking.

Early resistance to high-level (i.e. compiled) languages came from assembly programmers who couldn’t imagine that the compiler could generate code that was just as performant as their hand-crafted product. For a while they were right, but improved compiler design and the relentless performance increases in hardware made it so that even an extra 10-20% boost you might get from perfectly hand-crafted assembly was almost never worth the developer time.

There is an obvious parallel here, but it’s not quite the same. The high-level language is effectively a formal spec for the abstract machine which is faithfully translated by the (hopefully bug-free) compiler. Natural language is not a formal spec for anything, and LLM-based agents are not formally verifiable software. So the tradeoffs involved are not only about developer time vs. performance, but also correctness.


> So the tradeoffs involved are not only about developer time vs. performance, but also correctness.

The "now that producing plausible code is free, verification becomes the bottleneck" people are technically right, of course, but I think they're missing the context that very few projects cared much about correctness to begin with.

The biggest headache I can see right now is just the humans keeping track of all the new code, because it arrives faster than they can digest it.

But I guess "let go of the need to even look at the code" "solves" that problem, for many projects... Strange times!

For example -- someone correct me if I'm wrong -- OpenClaw was itself almost entirely written by AI, and the developer bragged about not reading the code. If anything, in this niche, that actually helped the project's success, rather than harming it.

(In the case of Windows 11 recently.. not so much ;)


> The "now that producing plausible code is free, verification becomes the bottleneck" people are technically right, of course, but I think they're missing the context that very few projects cared much about correctness to begin with.

It's certainly hard to find, in consumer tech, an example of a product that was displaced in the market by a slower-moving competitor because of buggy releases. Infamously, "move fast and break things" has been the law of the land.

In SaaS and B2B, deterministic results become much more important. There are still bugs, of course, but showstopper bugs are major business risks. And combinatorial state+logic still makes testing a huge tarpit.

The world didn't spend the last century turning customer service agents and business-process workers into script-following human robots for no reason, and big parts of it won't want to reintroduce high levels of randomness... (That's not even necessarily good for any particular consumer - imagine an insurance company with a "claims agent" that got sweet-talked into spending hundreds of millions more on things that were legitimate benefits for its customers, but that management wanted to limit whenever possible on technicalities.)


For a great many software projects no formal spec exists. The code is the spec, and it gets modified constantly based on user feedback and other requirements that often appear out of nowhere. For many projects, maybe ~80% of the thinking about how the software should work happens after some version of the software exists and is being used to do meaningful work.

Put another way, if you don't know what correct is before you start working then no tradeoff exists.


> Put another way, if you don't know what correct is before you start working then no tradeoff exists.

This goes out the window the first time you get real users, though. Hyrum's Law bites people all the time.

"What sorts of things can you build if you don't have long-term sneaky contracts and dependencies" is a really interesting question and has a HUGE pool of answers that used to be not worth the effort. But it's largely a different pool of software than the ones people get paid for today.


> This goes out the window the first time you get real users, though.

Not really. Many users are happy for their software to change if it's a genuine improvement. Some users aren't, but you can always fire them.

Certainly there's a scale beyond which this becomes untenable, but it's far higher than "the first time you get real users".


It's also important to remember that vibe coders throw away the natural language spec each time they close the context window.

Vibe coding is closer to compiling your code, throwing the source away and asking a friend to give you source that is pretty close to the one you wrote.


OK, but I've definitely read the assembly listings my C compiler produced when it wasn't working like I hoped. Even if that's not all that frequent, it's something I expect to have to do from time to time, and it is definitely part of "programming".

> which is faithfully translated by the (hopefully bug-free) compiler.

"Hey Claude, translate this piece of PHP code into Power10 assembly!"


Imagine if high-level coding worked like this: write a first draft and get assembly. All subsequent high-level code is written in a REPL and expresses changes to the assembly, or queries the state of the assembly, and is then discarded. Only the assembly is checked into version control.

Or the opposite, all applications are just text files with prompts in them and the assembly lives as ravioli in many temp files. It only builds the code that is used. You can extend the prompt while using the application.

I'm glad you wrote this comment because I completely agree with it. I'm not saying there is no need for software engineers who deeply consider architecture, who can fully understand the truly critical systems that exist at most software companies, and who can help dream up the harness capabilities to make these agents work better.

I just am describing what I'm doing now, and what I'm seeing at the leading edge of using these tools. It's a different approach - but I think it'll become the most common way of producing software.


> that is they are describing a new type of programmer that will only use agents and never read the underlying code

> and wouldn't look at the code anymore than, say, a PHP developer would look at the underlying assembly

This really puts down the work that the PHP maintainers have done. Many people spend a lot of time crafting the PHP codebase so you don't have to look at the underlying assembly. There is a certain amount of trust that I as a PHP developer assume.

Is this what the agents do? No. They scrape random bits of code everywhere and put something together with no craft. How do I know they won't hide exploits somewhere? How do I know they don't leak my credentials?


That is true for all languages. Very high quality until you use a lib, a module or an api.

It’s pretty well established that you cannot understand code without having thought things through while writing it. You need to know why things are written the way they are to understand what is written.

Yeah, just reading code does little to help me understand how a program works. I have to break it apart and change it and run it. Write some test inputs, run the code under a debugger, and observe the change in behavior when changing inputs.

If that were true, then only the person who wrote the code could ever understand it enough to fix bugs, which is decidedly not true.

Bubblewrap supports overlayfs mounts [1]. Seems like you should be able to replicate that flow with it.

[1] https://github.com/containers/bubblewrap/issues/412
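
For reference, a minimal sketch of what that flow might look like, assuming a bwrap new enough (0.8.0+) to have the --overlay-src/--overlay options discussed in that issue; the project path and mount point here are just illustrative, so check your local bwrap(1) before relying on the exact flags:

    # Hypothetical: read-only checkout at ~/project, writes captured in
    # ./upper, with ./work as the overlayfs scratch dir (same filesystem
    # as ./upper).
    mkdir -p upper work
    bwrap \
      --ro-bind / / \
      --dev /dev \
      --overlay-src ~/project \
      --overlay "$PWD/upper" "$PWD/work" /work \
      bash

Anything written under /work inside the sandbox should land in ./upper on the host, so you can review or discard it afterwards.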


Without any credentials does network access matter?

Hashicorp has mostly abandoned Vagrant, so I'd avoid it.

This is not true in the US where everything is automatically copyrighted and protected, so nothing goes directly into the public domain (even if the author wants it). Thus no license means that you have no license to use the code legally.

Correct. That's not true in Europe either. IIRC it's not true in Asia either. I don't understand why so many people who don't have even the most basic understanding or experience of licensing feel they must post their opinions as if they were facts. People are certainly entitled to their opinions, but so many comments here are speaking absolute nonsense about licensing as if it were fact. I genuinely don't understand why people feel compelled to do so.

Or any country the US has a reciprocal copyright treaty with, which is every country except a vanishingly small set.

A work is protected by copyright the moment it's authored, and all rights are reserved unless it's explicitly licensed otherwise.


The evidence is that many respected people with decades of experience generally agree with them. They are not scientific theories that require validation through testing; they are general advice that is usually true and good to keep in mind.

SysV init was the overengineered cousin to BSD init and I never liked it. Easily my least favorite of all init systems I've worked with over the last 30 years. On the flip side, daemontools or maybe runit were my favorites. Lots of good options for init/supervision tooling over the years and SysV was not among them.

If we look to LFS for its academic merit, I'm saddened that key historical elements of Unix/Linux design are being left behind, much like closing down a wing of a laboratory or museum and telling students that they'll need to whip up their own material to fill in those gaps.

The old versions of LFS are still available to satisfy your curiosity.

Someone should probably save the required source package versions (and patches) before they disappear though


Yes, it's like asking students to actually produce something themselves.

What a horrific thought.


If the students have been well trained, they should be trusted to experiment. If the course curriculum demands that they produce something themselves yet does not educate them on doing so, that's horrific.

Certain things should only be taught as a warning. SysV init is one of them.

Back in the day, system run levels were seen as desirable. SysVinit went all in on that concept. So, if the concept of run levels isn't clear to the student beforehand, the init system for making it happen would therefore be mystifying and maybe even inscrutable.

Runlevels may be an interesting idea (e.g. the single-user maintenance level). But a bunch of shell scripts, each complex enough to support different commands, sort-of-declare dependencies, etc, is not such a great idea. A Makefile describing runlevels and service dependencies would be a cleaner design (not necessarily a nicer implementation).
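
Something like this purely hypothetical sketch (GNU make syntax; every target and service name here is made up for illustration, and recipe lines start with a tab):

    # Runlevels as phony aggregate targets, services as ordinary targets;
    # the prerequisites are the dependency graph, and one catch-all rule
    # starts whatever service a target names.
    .PHONY: single multi

    single: syslogd
    multi: single network sshd

    sshd: network

    %:
    	/etc/init.d/$@ start

`make multi` would then start syslogd, network and sshd in dependency order, and `make -j` would give you parallel startup for free.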

On the contrary, I much prefer a full Turing complete language rather than trying to shoehorn my ideas into someone else's limited system.

The scripts don't have to be complicated, and it doesn't have to be shell scripts. You can use any script or executable that the Linux kernel can load and run. But shell scripts work great and have all the power needed.

Systemd is a giant, flaming heap of buggy ass code. Good riddance to it.


From the announcement, it saddens them too:

> As a personal note, I do not like this decision. To me LFS is about learning how a system works. Understanding the boot process is a big part of that. systemd is about 1678 "C" files plus many data files. System V is "22" C files plus about 50 short bash scripts and data files.

However, the reasoning they provide makes sense. It's hard to build a Linux system with a desktop these days without Sysd.


> It's hard to build a Linux system with a desktop these days without Sysd.

Most Gentoo Linux desktop users disagree. In fact, OpenRC is the default in that distro.

Having said that, I do expect that Gentoo has more manpower available than LFS.


Maybe they're KDE users. I was under the impression that gnome requires it. FTA it sounds like KDE soon will too. Gentoo doesn't come with a desktop by default either; you have to emerge it, which might install systemd.

FTA: "The second reason for dropping System V is that packages like GNOME and soon KDE's Plasma are building in requirements that require capabilities in systemd"


> I was under the impression that gnome requires it.

It doesn't seem to require it at this moment. I have "-systemd" in my USE flags, and have neither sys-apps/systemd nor gnome-base/gnome currently installed. After enabling several USE flags that have nothing to do with systemd [0], emerge was quite happy to offer to install gnome-base/gnome and its dependencies, and absolutely did not offer to install systemd.

Honestly, I don't even know if GNOME has a hard dependency on Wayland... I see many of the dependent packages in the 'gnome-*' categories have an "X" USE flag. I CBA to investigate, though.

Is KDE Plasma building in hard systemd requirements, or is it just building in hard Wayland requirements? I'd known about the latter [1] and -because I'd thought it was important to the KDE folks that KDE runs on BSD- would be surprised if they irreversibly tethered themselves to systemd.

[0] introspection pulseaudio vala server screencast wayland theora eds egl gles2

[1] Though do note that the same blog post that announced the change in policy for Plasma also announced that no other KDE software was going to have a hard dependency on Wayland for the foreseeable future.


Is it? What's the connection between systemd and having a desktop?

Read the article: "The second reason for dropping System V is that packages like GNOME and soon KDE's Plasma are building in requirements that require capabilities in systemd"

If GNOME and KDE were the only desktop solutions, your "Read the article" comment would be sensible.

LFS never had academic, educational, or pedagogical merit. It was always sheer faith that by doing enough busywork (except the truly difficult stuff), something might rub off. Maybe you learn some names of component parts. Components change.

Could you expand on this comment please? (I don't think your viewpoint should be so rudely dismissed through downvoting and moving on.) What do you mean?

It's always a little amusing when the Open Source Tea Party bemoans the lack of "the UNIX way" and someone else with actual historical experience (and not misguided nostalgia) brings perspective.

On a related note, X11 was never good and there's a whole chapter in the UNIX-HATERS Handbook explaining why.


It was never good? Weird. Works fine for me.

When will Wayland earn the label "good"? I don't think it currently qualifies.


It works fine for you because...

1. You're using X11 with hardware that is fantastically newer than anything available at the time the UNIX-HATERS Handbook was written.

2. Every graphics vendor that still supports X11 is shipping workarounds for bugs in Xorg.

I used to have a citation for that second one but it went away when Hector Martin dropped off the face of the Internet.


SysV was this weird blind spot for many years. I remember installing daemontools on the OpenBSD server my office ran on because it was nicer to work with, and thinking that the Linux world would switch to avoid losing that particular feature war with Windows.

Gentoo Linux has been using OpenRC for at least as long as I've been using it (~25 years). It's unfortunate that OpenRC was unable to summon the manpower to do the spot-additions required for it to win the political war way back when Debian was looking to move from straight SysV init.

It would have never happened. Systemd wasn't chosen because "we the people" chose it. Things don't work like that anymore, regardless of what the marketing brochure might say.

These laws would have one upside: open models would remain open and available. A big problem with at least some of the proposed AI regulation is that it could outlaw an increasingly important aspect of general-purpose computing for the majority of people.

I don't know any proposed laws that limit models. I only know of proposed laws that limit deployment of models.

You forgot a question ..

Did we?

No. We’re adapting to something new in our environment like we always do. It just doesn’t happen overnight.

