I've leaned heavily towards this for the last 8 or so years.
I've yet to have anyone mistakenly modify anything when they need to pass --commit, whereas I've repeatedly had people accidentally modify stuff because they forgot --dry-run.
Totally agree it shouldn't be required for basic tools; but if I'm ever developing a script that performs any kind of logic before reaching out to a DB or vendor API to modify 100k user records, a flag that lets me verify the sanity of that logic first is a necessity.
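Something like this minimal Python sketch of the pattern (the Record type and the update logic are made-up stand-ins for the real DB/API calls):

    import argparse
    from dataclasses import dataclass

    @dataclass
    class Record:
        id: int
        value: str

    def load_records():
        # Stand-in for the real DB / vendor-API fetch.
        return [Record(1, "old"), Record(2, "stale")]

    def compute_update(record):
        # The logic whose sanity you want to check before committing.
        return record.value.upper()

    def main():
        parser = argparse.ArgumentParser()
        # Mutation is opt-in: without --commit the script only reports.
        parser.add_argument("--commit", action="store_true",
                            help="actually apply changes (default: dry run)")
        args = parser.parse_args()

        records = load_records()
        for r in records:
            new = compute_update(r)
            if args.commit:
                print(f"updating {r.id}: {r.value!r} -> {new!r}")
                # the real write to the DB/API would go here
            else:
                print(f"would update {r.id}: {r.value!r} -> {new!r}")
        if not args.commit:
            print(f"dry run: {len(records)} records would change; "
                  f"pass --commit to apply")

    if __name__ == "__main__":
        main()

The point is that forgetting the flag gives you the safe behavior, not the destructive one.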
For most of these local data-manipulation commands, I'd rather just have them behave dangerously and rely on filesystem snapshots to roll back when needed. With a modern filesystem like ZFS or Btrfs, you can take a full snapshot every minute and keep it for a while, which negates the damage from almost all of these scripts. The snapshots double as a backup solution too.
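As a rough sketch of the retention side, assuming ZFS and a hypothetical dataset named tank/home, something like this run from a per-minute cron job keeps an hour of minutely snapshots:

    import subprocess
    import time

    DATASET = "tank/home"   # hypothetical dataset name
    KEEP = 60               # retain the last 60 minutely snapshots

    def snapshot():
        name = f"{DATASET}@minutely-{time.strftime('%Y%m%d-%H%M')}"
        subprocess.run(["zfs", "snapshot", name], check=True)

    def prune():
        # List this dataset's snapshots oldest-first, destroy the surplus.
        out = subprocess.run(
            ["zfs", "list", "-H", "-t", "snapshot", "-o", "name",
             "-s", "creation", "-d", "1", DATASET],
            check=True, capture_output=True, text=True).stdout
        minutely = [s for s in out.splitlines() if "@minutely-" in s]
        for snap in minutely[:-KEEP]:
            subprocess.run(["zfs", "destroy", snap], check=True)

    if __name__ == "__main__":
        snapshot()
        prune()

In practice existing tools like zfs-auto-snapshot or sanoid already handle this scheduling and retention for you.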
Yeah, but that's because it's implemented poorly. It literally asks you to confirm deletion of each file individually, even for thousands of files.
What it should do is generate a user-friendly overview of what's about to be deleted, grouping files by some criterion, e.g. by directory, so you'd only need to confirm a few times regardless of how many files are being deleted.
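A rough Python sketch of that grouping (the prompt wording and sample paths are just illustrative):

    import os
    from collections import Counter

    def confirm_deletions(paths):
        # Summarize the doomed files per directory so the user confirms a
        # short overview instead of thousands of individual prompts.
        by_dir = Counter(os.path.dirname(p) or "." for p in paths)
        for directory, count in sorted(by_dir.items()):
            print(f"{count:6d} file(s) under {directory}/")
        answer = input(f"Delete all {len(paths)} files listed above? [y/N] ")
        return answer.strip().lower() == "y"

    if __name__ == "__main__":
        victims = ["build/a.o", "build/b.o", "docs/tmp.html"]
        if confirm_deletions(victims):
            print("deleting...")  # the actual os.remove() calls would go here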