Hacker News

I never had a conversation like that — probably because I personally rarely use LLMs to actually generate code for me — but I've somehow subconsciously learned to do this myself, especially with clarifying questions.

If I find myself needing to ask a clarifying question, I always edit the previous message to ask it rather than sending a new one, because the models seem to keep forcing whatever they said in their clarification into every further response.
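
To illustrate the difference, here is a minimal sketch assuming an OpenAI-style list of chat messages (the function names and the conversation content are hypothetical, purely for illustration): appending a follow-up keeps the clarification exchange in the context window, while editing replaces the earlier user turn so the clarification never lingers.

```python
# Hypothetical sketch of the two strategies, using an OpenAI-style
# message list. Names and content are illustrative, not a real API.

def followup_by_appending(history, question):
    # Append a new user turn: the clarifying exchange stays in
    # context, so the model keeps conditioning on its own reply.
    return history + [{"role": "user", "content": question}]

def followup_by_editing(history, question):
    # Edit strategy: drop the assistant's clarification reply and
    # replace the last user turn with a revised prompt, so the
    # clarification never appears in the context at all.
    trimmed = list(history)
    while trimmed and trimmed[-1]["role"] != "user":
        trimmed.pop()   # drop the assistant's clarification reply
    if trimmed:
        trimmed.pop()   # drop the user turn being replaced
    return trimmed + [{"role": "user", "content": question}]

history = [
    {"role": "user", "content": "Write a parser"},
    {"role": "assistant", "content": "Should it handle nested braces?"},
]

appended = followup_by_appending(history, "Yes, nested braces too")
edited = followup_by_editing(history, "Write a parser that handles nested braces")

print(len(appended))  # 3 messages: the clarification stays in context
print(len(edited))    # 1 message: only the revised prompt remains
```

The edited variant is what the comment describes: from the model's point of view, the clarifying question was never asked, so it cannot over-weight its own clarification in later responses.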

It's... odd... to find myself conditioned, by the LLM, to the proper manners of conditioning the LLM.


