
I think if your job is to assemble a segment of a car based on a spec using provided tools and pre-trained processes, it makes sense if you worry that giant robot arms might be installed to replace you.

But if your job is to assemble a car in order to explore what modifications to make to the design, experiment with a single prototype, and determine how to program those robot arms, you’re probably not thinking about the risk of being automated.

I know a lot of counterarguments are a form of, “but AI is automating that second class of job!” But I just really haven’t seen that at all. What I have seen is a misclassification of the former as the latter.





A software engineer with an LLM is still infinitely more powerful than a commoner with an LLM. The engineer can debug, guide, change approaches, and give very specific instructions if they know what needs to be done.

The commoner can only hammer the prompt repeatedly with "this doesn't work can you fix it".

So yes, our jobs are changing rapidly, but this doesn't strike me as making us obsolete any time soon.


I listened to a segment on the radio where a college teacher told their class that it was okay to use AI to assist them during a test, provided:

1. Declare in advance that AI is being used.

2. Provide verbatim the question-and-answer session.

3. Explain why the answer given by the AI is a good answer.

Part of the grade will be based on 1, 2, and 3.

Fair enough.


It’s better than nothing, but the problem is students will figure out that they can feed step 2 right back to the AI, logged in via another session, to get 3.

This is actually a great way to foster the learning spirit in the age of AI. Even if the student uses AI to arrive at an answer, they will still need to, at the very least, ask the AI for an explanation that will teach them how it arrived at the solution.

No, this is not the way we want learning to be, just like how students are banned from using calculators until they have mastered the foundational thinking.

That's a fair point, but AI can do much more than just provide you with an answer like a calculator.

AI can explain the underlying process of manual computation and help you learn it. You can ask it questions when you're confused, and it will keep explaining no matter how far off topic you go.

We don't consider tutoring bad for learning - quite the contrary, we tutor slower students to help them catch up, and advanced students to help them fulfill their potential.

If we use AI as if it were an automated, tireless tutor, it may change learning for the better. Not that learning was anywhere near great as it was.


You're assuming the students are reading any of this. They're not, they're just copy/pasting it.

Well, you can lead the horse to water, but you can't make him drink.

If you assume all students are lazy assholes who want to cheat the system, then I doubt there's anything that would help them learn.


Also, so much of the LLM's answer is fluff, when it's not outright wrong.

There is research that shows that banning calculators impedes the learning of maths. It is certainly not obvious to me that calculators will have a negative effect - I certainly always allowed my kids to use them.

LLMs are trickier, and their use needs to be restricted to prevent cheating, just as my kids had restrictions on what calculators they could use in some exams. That does not mean they are all bad, or even net bad if used correctly.


> There is research that shows that banning calculators impedes the learning of maths.

I've seen oodles of research concluding the opposite at the primary level (grades 1-5, say). If the research you mention exists, it must be very well hidden :-/


There were 79 studies used in this meta-analysis, so it cannot be that well hidden: https://psycnet.apa.org/record/1987-11739-001

> There were 79 studies used in this meta-analysis, so it cannot be that well hidden: https://psycnet.apa.org/record/1987-11739-001

From the first page of that study

> Do calculators threaten basic skills? The answer consistently seemed to be no, provided those basic skills have first been developed with paper and pencil.

So, yeah, there are no studies I have found that support any assertion along the lines of:

>>> There is research that shows that banning calculators impedes the learning of maths.

If you actually find any, we still have to consider that the meta-study you posted is already 74 studies ahead in confirming that you are wrong.

Best would be for you to find 75 studies that confirm your hypothesis. Unfortunately, even though I read studies all the time, at one point had full-text access to studies via an institutional license, and spent almost all of my after-hours time between 2009 and 2011 actually reading papers on primary/foundational education, I have not seen even one that supports your assertion.

I have read well over a hundred papers on the subject, and did not find one. I am skeptical that you will find any.


  > There is research that shows that banning calculators impedes the learning of maths.

Please share what you know. My search found a heap of opinions and just one study, in which calculator use made children less able to calculate by themselves; it did not address the ability to learn and understand math in general.


Calculators don't show you the steps. AI can.

Symbolic computation is a thing. How do you think Wolfram Alpha worked for 20 years before AI?
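
To illustrate (a minimal sketch with Python's sympy library; the example expressions are made up, and the outputs in the comments are what sympy typically prints):

  from sympy import Eq, S, cos, integrate, solveset, symbols

  x = symbols("x")

  # Exact antiderivative, produced by symbolic rule application rather than generated text
  print(integrate(x**2 * cos(x), x))  # x**2*sin(x) + 2*x*cos(x) - 2*sin(x)

  # Exact solution set of a quadratic over the reals
  print(solveset(Eq(x**2 - 5*x + 6, 0), x, domain=S.Reals))  # {2, 3}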

And it’s making that up as well.

Yeah; it gets steps 1-3 right, 4-6 obviously wrong, and then 7-9 subtly wrong such that a student, who needs it step by step while learning, can't tell.

That's roughly what we did as well. Use anything you want, but in the end you have to be able to explain the process and the projects are harder than before.

If we can do more now in a shorter time then let's teach people to get proficient at it, not arbitrarily limit them in ways they won't be when doing their job later.


Props to the teacher for putting in the work to thoughtfully grade an AI transcript! As I typed that, I wondered: might a lazy teacher then use AI to grade the students' AI transcripts?

I think it's a bit like the Dunning-Kruger effect. You need to know what you're even asking for and how to ask for it. And you need to know how to evaluate if you've got it.

This actually reminds me so strongly of the Pakleds from Star Trek TNG. They knew they wanted to be strong and fast, but the best they could do was say, "make us strong." They had no ability to evaluate that their AI (sorry, Geordi) was giving them something that looked strong but simply wasn't.


Oh wow, this is a great reference/image/metaphor for "software engineers" who misuse these tools - "the great Pakledification" of software.

Yep, I've seen a couple of folks pretending to be junior PMs, thinking they can replace developers entirely. The problem is, they can't write a spec. They can define a feature at a very high level, on a good day. They resort to asking one AI to write them a spec that they feed to another.

It's slop all the way down.


People have tried that with everything from COBOL to low-code. It's even succeeded in some problem domains (e.g. things people code with spreadsheet formulas), but there is no general solution that replaces programmers entirely.

A "commoner"... Could you possibly be more full of yourself?

That was literally the opposite of my intention. Maybe the choice of word wasn't perfect, but basically, I was trying to highlight that domain expertise is still valuable in the specific scenario of software engineering.

The same could be said about any other job: if you put me up against a construction worker and give us both expensive power tools, he will still do a better job than me because I have no experience in that domain.


Agree totally.

My job is to make people who have money think I'm indispensable to achieving their goals. There's a good chance AI can fake this well enough to replace me. Faking it would be good enough in an economy with low levels of competition; everyone can judge for themselves if this is our economy or not.

I mean it sounds to me like a beautiful corporate poison. :)

I don’t think this is the issue “yet”. It’s that no matter which class you’re in, your CEO does not care. Mediocre AI work is enough to give them immense returns and an exit. They’re not looking out for the unfortunate bag holders. The world has always had tolerance for highly distributed crap. See Windows.

This seems like a purely cynical take lacking any substantive analysis.

Despite whatever nasty business practices and shitty UX Windows has foisted on the world, there is no denying the tremendous value that it has brought, including impressive backwards compatibility that rivals some of the best platforms in computing history.

AI shovelware pump-n-dump is an entirely different short-term game that will never get anywhere near Microsoft levels of success. It's more like the fly-by-nights in the dotcom bubble that crashed and burned without having achieved anything except raising a large investment.


You misunderstand me. While I left Windows over a decade ago, I recognize it was a great OS in some respects. I was referring to the recent AI-fueled Windows developments and ad-riddled experiences. Someone decided that is fine, and you won't see orgs or regular users drop it... tolerance.

This is actually a really good description of the situation. But I will say, as someone who prided myself on being the second one you described, I am becoming very concerned about how much of my work was misclassified. It does feel like a lot of the work I did in the second class is being automated, where maybe previously it just overinflated my ego.

SWE is more like Formula 1, where each race presents a unique combination of track, car, driver, and conditions. You may have tools to build the thing, but designing the thing is the main issue. The code editor, linter, test runner, and build tools are for building the thing. Understanding the requirements and the technical challenges is designing the thing.

The other day I said something along the lines of "be interested in the class, not the instance," trying to articulate a sense of metaprogramming and meta-analysis of a problem.

Y is causing Z and we should fix that. But if we stop and study the problem, we might discover that X causes the class of Y problem so we can fix the entire class, not just the instance. And perhaps W causes the class of X issue. I find my job more and more being about how far up this causality tree can I reason, how confident am I about my findings, and how far up does it make business sense to address right now, later, or ever?
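
A toy sketch of that distinction in code (a hypothetical Python example of my own, not anything specific): fixing the instance patches the one reported failure, while fixing the class removes the condition that produces that kind of failure.

  from dataclasses import dataclass

  # Fixing the instance (Z): patch the one crash that was reported.
  def format_name(user: dict) -> str:
      if user.get("name") is None:  # Y: this particular record had no name
          return ""
      return user["name"].strip()

  # Fixing the class (X): unvalidated records were entering the system in the
  # first place, so validate once at the boundary and the whole class of
  # Y-style bugs goes away.
  @dataclass
  class User:
      name: str = ""
      email: str = ""

  def ingest(raw: dict) -> User:
      return User(name=(raw.get("name") or "").strip(),
                  email=(raw.get("email") or "").strip())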


Is it? As an F1 fan, I really fail to see the metaphor. The cars do not change that much; only the setup does, based on track and conditions. The drivers are fairly consistent through the season. Once a car is built and a pecking order is established in the season, it is pretty unrealistic to expect a team with a slower car to outcompete a team with a faster car, no matter what track it is (since the conditions affect everyone equally).

Over the last 16 years, Red Bull has won 8 times, Mercedes 7, and McLaren 1. Which means that, regardless of the change in tracks and conditions, the winners are usually the same.

So either every other team sucks at "understanding the requirements and the technical challenges" on a clinical basis or the metaphor doesn't make a lot of sense.


Most projects don’t change that much either. Head over to a big open-source project, and more often than not you will only see tweaks. Being able to do those tweaks requires a very good understanding of the whole project (Naur’s theory of programming).

Also, in software we can do big refactors, whereas F1 teams are restricted to the version they’ve put in the first race. But we do have a lot of projects that were designed well enough that they’ve never changed the initial version, just built on top of it.


I wonder how true this was historically. I imagine race car driving had periods of rapid, exciting innovation. But I can see how a lot of it has probably reached levels of optimization where the rules, safety, and technology change well within the realm of diminishing returns. I'm sure there's still a ridiculous amount of R&D though? (I don't really know race car driving.)

Sure, there are crazy levels of R&D, but that mostly happens off-season or when there is a change in regulations, which usually happens every 4-5 years. Interestingly, this year the entire grid starts with new regs and we don't really know the pecking order yet.

But my whole point was that, race to race, it really isn't as different for the teams as the comment implied, and I am still kind of lost as to how it fits SWE unless you're really stretching things.

Even then, most teams don't even make their own engines, etc.


Do you really think that rainy Canada is the same as Jeddah, or Singapore? And what is the purpose of the free practice sessions?

You’ve got the big bet of designing the car between seasons (which is kinda like the big architectural decisions you make at the beginning of a project). Then you’ve got the refinement over the season, which is like bug fixes and performance tweaks. There are the parts upgrades, which are like small features added on top of the initial software.

For the next season, you either improve on the design or start from scratch, depending on what you’ve learned. In the first case, it is the new version of the software. In the second, that’s the big refactor.

I remember that the reserve drivers may do a lot of simulations to provide data to the engineers.


You are describing traditional (deterministic?) automation before AI. AI systems as general as today's SOTA LLMs will happily take on the job regardless of whether the task falls into class I or class II.

Ask a robot arm "how should we improve our car design this year" and it'll certainly get stuck. Ask an AI and it'll give you a real opinion that's at least on par with a human's. If a company builds enough tooling to complete the "AI comes up with idea -> AI designs prototype -> AI robot physically builds the car -> AI robot test drives the car -> AI evaluates all prototypes and confirms next year's design" feedback loop, then theoretically this can definitely work.

This is why AI is seen as such a big deal - it's fundamentally different from all previous technologies. To an AI, there is no line that would distinguish class I from II.


Well, a lot of managers view their employees as doing the former, when they’re really doing the latter.

> I know a lot of counterarguments are a form of, “but AI is automating that second class of job!”

Uh, it's not the issue. The issue is that there isn't that much demand for the second class of job. At least not yet. The first class of job is what feeds billions of families.

Yeah, I'm aware of the lump of labour fallacy.


Discussing what we should do about the automation of labour is nothing new and is certainly a pretty big deal here. But I think you're reframing/redirecting the intended topic of conversation by suggesting that "X isn't the issue, Y is."

It wanders off the path, as if I responded with, "that's also not the issue; the issue is that people need jobs to eat."


It depends a lot on the type of industry I would think.


