Today, I told an LLM: "do not modify the code, only the unit tests" and guess what it did three times in a row before deciding to mark the test as skipped instead of fixing the test?
AI is weird, but I don't think it has any agency, nor did my prompt suggest it did.