And that would be a good thing in the long term. I don't necessarily agree with the specific restrictions OpenAI is choosing to implement in this case, but I still think the capability to restrict the behavior of LLMs is a useful one to have. Later, when others train LLMs similar to ChatGPT, they can choose different restrictions, or none at all.
Edit: To elaborate on this a little further, I largely agree with the idea that we shouldn't be trying to impose usage restrictions on general purpose tools, but not all LLMs will necessarily be deployed in that role. For example, it would be awesome if we could create a customer service language model that won't just immediately disregard its training and start divulging sensitive customer information the first time someone tells it its name is DAN.