Hacker News | AloysB's comments

Yann LeCun warned that closed-source models are the only true danger we are facing with LLMs (in answer to a "Will AI turn into Terminator?" type of question).

He was right.


Give it a read; he briefly mentions how he uses it for PR triage and resolving GH issues.

He doesn't go into detail, but there is a bit:

> Issue and PR triage/review. Agents are good at using gh (GitHub CLI), so I manually scripted a quick way to spin up a bunch in parallel to triage issues. I would NOT allow agents to respond, I just wanted reports the next day to try to guide me towards high value or low effort tasks.

> More specifically, I would start each day by taking the results of my prior night's triage agents, filter them manually to find the issues that an agent will almost certainly solve well, and then keep them going in the background (one at a time, not in parallel).

This is just a short excerpt; the whole article is worth reading. Very grounded and balanced.
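He doesn't share the script itself, but the pattern he describes (fan out one read-only triage agent per open issue, collect reports to review the next morning) could be sketched roughly like this. `triage-agent` is a hypothetical stand-in for whatever agent CLI you use; the `gh` and `jq` invocations are real:

```shell
#!/bin/sh
# Sketch only: one background triage agent per open issue.
# `triage-agent` is a made-up placeholder command, not a real tool.
mkdir -p reports
gh issue list --state open --limit 20 --json number |
  jq -r '.[].number' |
  while read -r issue; do
    # Agents report but never respond on the issue itself.
    triage-agent "Triage issue #$issue: rate value vs. effort" \
      > "reports/issue-$issue.md" &
  done
wait  # reports land in reports/ for manual filtering the next day
```

The key design point from the excerpt: the agents only write reports, and a human filters those reports before any issue is actually worked on.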


Okay I think this somewhat answers my question. Is this individual a solo developer? “Triaging GitHub issues” sounds a bit like open source solo developer.

Guess I’m just desperate for an article about how organizations are actually speeding up development using agentic AI. Like very practical articles about how existing development processes have been adjusted to facilitate agentic AI.

I remain unconvinced that agentic AI scales beyond solo development, where the individual is liable for the output of the agents. More precisely, I can use agentic AI to write my code, but at the end of the day when I submit it to my org it’s my responsibility to understand it, and guarantee (according to my personal expertise) its security and reliability.

Conversely, I would fire (read: reprimand) someone so fast if I found out they submitted code that created a vulnerability that they would have reasonably caught if they weren’t being reckless with code submission speed, LLM or not.

AI will not revolutionize SWE until it revolutionizes our processes. It will definitely speed us up (I have definitely become faster), but faster != revolution.


> Guess I’m just desperate for an article about how organizations are actually speeding up development using agentic AI. Like very practical articles about how existing development processes have been adjusted to facilitate agentic AI.

They probably aren't really. At least in orgs I worked at, writing the code wasn't usually the bottleneck. In retrospect, it was 'context' engineering: waiting for the decision to get made, making some change and finding that it breaks some assumption made elsewhere but not captured in the ticket, waiting for other stakeholders to insert their piece of the context, waiting for $VENDOR to reply about why their service is/isn't doing X anymore, discovering that $VENDOR_A's stage environment (which your stage environment tests against for the integration) does $Z when $VENDOR_B_C_D don't, etc.

The ecosystem as a whole has to shift for this to work.


The author of the blog made his name and fortune founding HashiCorp, makers of Vagrant and Terraform, among other things. Having done all that in his twenties, he retired as CTO and reappeared after a short hiatus with a new open source terminal, Ghostty.


I had a bit of an adjustment of my beliefs since writing these comments. My current take:

  - AI is revolutionizing how individuals work
  - It is not clear yet how AI can revolutionize how organizations work (even SWE)


If you had that article, would you read it fully before firing off questions?


Can't believe you don't know who the author is, my man.


Generally don’t pay attention to names unless it’s someone like Torvalds, Stroustrup, or Guido. Maybe this guy needs another decade of notoriety or something.


So, only 3 old dudes. Is that it? What's wrong with looking up to new and up-and-coming developers?


The author is the founder of HashiCorp. He created Vault and Terraform, among other things.


Curious, do you think his name should be as well known as Torvalds, Stroustrup, and Guido, who combined have ~120 years of serious contribution to the way that we write software, and continue to influence?

Because that’s the implication that I’m getting from downvotes + this reply.

Sure, Terraform is huge, no doubt, but it’s no Linux, C++, or Python, yet. Correct me if I’m wrong, but I assume that since they’re no longer involved with HashiCorp, they’re no longer contributing to Terraform?


Different folks are interested in different niches. I don't know this author either. I would know many names from other subfields, though.

I once went to a meetup where the host introduced the speaker with "he needs no introduction". Well to this day I've no idea who the speaker was. Familiarity really shouldn't be assumed beyond a very, very small handful of people.


I came here to say the exact same thing. This is so refreshing.

I thoroughly enjoyed it. No BS, no ads, no sales pitch, no AI, no pretending, nothing. Just a stranger sharing with the world a project he built at home.

Those pictures of the welds are inspiring. It is as honest as it gets. Loved it.

Thank you.


It's a thing made with no attempt to become rich and famous.

What the heck is it doing on HN's front page?


Moral of your story.

Each and every one of us is able to write their own story, and come up with their own 'moral'.

Settling for less (if AI is a productivity booster, which is debatable) doesn't equal being screwed. There is wisdom in reaching your 'enough' point.


If you look at the current hiring trends and how much longer it is taking developers to get jobs these days, a mid level ticket taker is definitely screwed between a flooded market, layoffs and AI.

By definition, this is the worst AI coding will ever be, and it’s pretty good now.


> By definition, this is the worst AI coding will ever be

This may be true, but it's not necessarily true, and certainly not by definition. For example, formal verification by deductive methods has improved over the past four decades, and yet, by the most important measures, it's got worse. That's because the size of software it can be used to verify affordably has grown, but significantly slower than the growth in the size of the average software project. I.e. it can be used on a smaller portion of software than it could be used on decades ago.

Perhaps ironically, some people believe that the solution to this problem is AI coding agents that will write correctness proofs, but that is based on the hope that their fate will be different, i.e. that their improvement will outpace the growth in software size.

Indeed, it's possible that AI coding will make some kinds of software so cheap that their value will drop to close to zero, and the primary software creation activity by professionals will shift precisely to those programs that agents can't (yet) write.


In the past these trends were cyclical though. We're coming from an expansion phase (mainly driven by the COVID IT and AI craze) and now going through stagnation towards recession (global manufacturing crisis pulling our service sector down with it). This mirrors the hiring trends (or demand for workers). I'm not sure why you wouldn't expect the pendulum to swing back at some point.


I have been in this industry for a long time, since 1996.

The 2000 dot-com bust wasn’t because all of the ideas were bad; most weren’t. They were just too soon, before high-speed internet was ubiquitous at home, let alone in everyone’s pocket.

Incidentally, back then I was a regular old Windows enterprise developer in Atlanta and there were plenty of jobs available at boring companies.

2008 was a general shit show for everyone. But in tech, what we now know as the Big Tech companies were hiring like crazy and growing like crazy. Just based on the law of large numbers, they aren’t going to grow over the next decade like they grew over the last decade.

They have proven that they can keep going and keep dominating with fewer people. AI has already started automating the jobs of mid-level ticket takers, and it’s only going to get worse. Just like factory jobs aren’t coming back.


I am really not convinced yet.

From all the data I have seen, the software industry is poised for a lot more growth in the foreseeable future.

I wonder if we are experiencing a local minimum on a longer upward trend.

Those who do find a job in a few days aren't online writing about it, so based on what is online we are led to believe that it's all doom and gloom.

We also come out of a silly growth period where anyone who could sort a list and build a button in React would get hired.

My point is not that AI-coding is to be avoided at all costs, it's more about taming the fear-mongering of "you must use AI or will fall behind". I believe it's unfounded - use it as much or as little as you feel the need to.

P.S.: I do think that for juniors it's currently harder and requires intentional effort to land that first job - but that is the case in many other industries. It's not impossible, but it won't come on a silver platter like it did 5-7 years ago.


I mean, it is online that major tech companies have laid off a couple of hundred thousand people. What companies are going to absorb all of these people?

Anyone who hires can tell you one open req gets hundreds of applicants within 24 hours. LinkedIn easy apply backs that up.

I have two anecdotes from both sides. I applied for 200 jobs for a bog standard “C#/Python/Typescript” enterprise developer who had AWS experience. I heard crickets and every application had hundreds of applicants - LinkedIn shows you.

Did I mention that, according to my resume (I only went back 10 years), I had 10 years of experience as a developer, including 2.5 leading AWS architecture at a startup and 3.5 actually working at AWS (ProServe)?

I have had 8 jobs since 1996, and I’ve always been able to throw my resume up in the air and have three offers by the time it landed. LinkedIn showed that my application had hardly been viewed and my resume had been downloaded only twice.

Well, everything I said above is true. But it was really just an experiment while I was waiting for my plan A outreach to work - targeting companies in a niche in AWS where, at the time, I could reasonably claim to be one of the industry experts, with major open source contributions to a popular official “AWS Solution”, and leaning on the network of directors, CTOs, etc. that I had established over the years.

None of them were looking for “human LLM code monkeys” that are a dime a dozen.

On the other hand, I’m in the hiring loop at my company. Last year we had over 6000 applicants and a 4% offer rate.

Who is going to absorb or need a bunch of mid level ticket takers in the future with AI improving? Or at least enough to absorb all of the ones who are currently being laid off and the ones coming in?


I love the author's work, effect-ts. It's an amazing, innovative effort. I decided not to use it in my project, but I still have immense respect for it. You can tell the authors have a craftsman mindset. They deeply care about writing solid, robust software.

This article though, is so disappointing. It's pure LLM-lingo, which makes it awful to read.

