> "Incompatible changes should not be introduced lightly to software that has a lot of dependent code. ..."
> I certainly agree that “incompatible changes should not be introduced lightly.”
This is agreeing with a sentence that the semver authors didn't write. The clause "that has a lot of dependent code" isn't in there arbitrarily.
What everyone in an ecosystem wants is high quality, easy-to-use, stable packages. In a perfect world populated by programming demigods, v1 of every package would be all three of those. In practice, human software engineers do not design usable APIs and write robust bug-free code without feedback from users. In order to act on that feedback, they need to change their code, which sacrifices stability.
The way this works in other healthy package ecosystems is that packages have a lifecycle. Early in a package's lifetime, it undergoes rapid, breaking change while it finds its way. It can do that relatively easily because only a small number of users are harmed by the churn. If it gets popular, that implies it has found a good local optimum of design and quality. At that point, stability takes precedence and the package's evolution slows down.
The path to a great library is usually through several versions of a kinda-shitty one. A good package manager supports both maintainers and consumers working on packages at all stages of that lifecycle.
> Able to predict the effects on users more clearly, authors might well make different, better decisions about their changes. Alice might look for a way to introduce the new, cleaner API into the original OAuth2 package alongside the existing APIs, to avoid a package split. Moe might look more carefully at whether he can use interfaces to make Moauth support both OAuth2 and Pocoauth, avoiding a new Pocomoauth package. Amy might decide it’s worth updating to Pocoauth and Pocomoauth instead of exposing the fact that the Azure APIs use outdated OAuth2 and Moauth packages. Anna might have tried to make the AWS APIs allow either Moauth or Pocomoauth, to make it easier for Azure users to switch.
Those decisions are only "better" because they route around a difficulty that the package manager arbitrarily put there in the first place.
There is already plenty of essential friction discouraging package maintainers from shipping breaking changes arbitrarily. Literally receiving furious email from users who have to migrate is pretty high on that list. I don't see the value in deliberately adding more friction in the package manager because the package manager authors think they know better than the package maintainer how to serve their users.
> To be clear, this approach creates a bit more work for authors, but that work is justified by delivering significant benefits to users.
Users don't want all of the work pushed onto maintainers. Life needs to be easy for maintainers too, because happy maintainers are how users get lots of stuff to use in the first place. If you push all of the burden onto package maintainers, you end up with a beautiful, brilliantly-lit grocery store full of empty shelves. Shopping is a pleasure but there's nothing to buy because producing is a chore.
Good tools distribute the effort across both kinds of users. There's obviously some amortization involved because a package is consumed more than it's maintained, but I'm leery of any plan that deliberately makes life harder for a class of users, without very clear positive benefit to others. Here, it seems like it makes it harder to ship breaking changes, without making anything else noticeably easier in return.
> They can't just decide to issue v2, walk away from v1, and leave users like Ugo to deal with the fallout. But authors who do that are hurting their users.
Are they hurting users worse than not shipping v2 at all? My experience is that users will prefer an imperfect solution over no solution when given the choice. It may offend our purist sensibilities, but the reality is that lots of good applications add value to the world built on top of mediocre, half-maintained libraries. Even the most beautiful, well-designed, robust packages often went through a period in their life where they were hacky, buggy, or half-abandoned.
A good ecosystem enables packages to grow into high quality over time, instead of trying to gatekeep out anything that isn't up to snuff.
> In Go, if an old package and a new package have the same import path, the new package must be backwards compatible with the old package.
This doesn't define for whom it must be backwards compatible. Breaking changes are not all created equal. Semver is a pessimistic measure. You bump the major version if a change could break at least one user, in theory. In practice, most "breaking" changes do not break most users.
If you remove a function that turned out to not be useful, that's a "breaking" change. But any consumer who wasn't calling that function in the first place is not broken by it. If maintainer A ships a change that doesn't break user B, a good package manager lets user B accept that change as easily as possible.
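To make that concrete, here's a minimal sketch (package, module path, and function names all invented for illustration):

```go
// flagparse.go — a hypothetical package at v1.x.
package flagparse

// Flags is a toy result type.
type Flags map[string]string

// Parse is the function everyone actually uses.
func Parse(args []string) Flags {
	f := make(Flags)
	for _, a := range args {
		f[a] = ""
	}
	return f
}

// LegacyParse turned out to be useless. Deleting it is a "major"
// change under semver, because some caller could exist in theory.
func LegacyParse(s string) Flags { return Parse([]string{s}) }
```

```go
// main.go — user B's code. It never mentions LegacyParse, so the
// "breaking" release that deletes it does not break B at all.
package main

import (
	"fmt"
	"os"

	"example.com/flagparse"
)

func main() {
	fmt.Println(flagparse.Parse(os.Args[1:]))
}
```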
As far as I can tell, the proposal here requires B to rewrite all of their imports and deal with the fact that their application may now contain two versions of that package if some other dependency still uses the old one. That's pretty rough.
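If I'm reading the proposal right, B's side of that migration looks something like this (same invented flagparse names; the /v2 import path convention is the one the article proposes):

```go
// B's code after migrating: the import path itself changes, so every
// file that mentions the package has to be edited.
package main

import (
	"fmt"
	"os"

	flagparse "example.com/flagparse/v2"
)

func main() {
	fmt.Println(flagparse.Parse(os.Args[1:]))
}
```

Meanwhile, any dependency that still imports the bare `example.com/flagparse` path gets the old major version, and both copies are compiled into B's binary as distinct packages.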
What you'll probably see is that A just never removes the function, even though it's dead weight for both the maintainer and the consumer. This scheme encourages packages to calcify at whatever their current level of quality happens to be. That might be fine if the package already happens to be great, but if it has a lot of room for improvement, this just makes that improvement harder.
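More likely you end up with this in the same hypothetical package, using Go's `Deprecated:` doc-comment convention:

```go
// Deprecated: LegacyParse never worked well; use Parse instead.
// It can't actually be deleted, because deletion means a new major
// version and an import-path rewrite for every user, so it stays
// here as permanent dead weight.
func LegacyParse(s string) Flags { return Parse([]string{s}) }
```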
Well, the article calls for a v0, which seems to be exactly for the use case you describe? There are no import path changes, undergoing "rapid, breaking change" is allowed, and if you ever find a good local optimum you can graduate to v1 without any import path change either. I don't see any requirement to ever move to v1, although users may understandably prefer libraries that do. I don't quite understand what additional support you are looking for from "a good package manager".
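Concretely, the whole lifecycle happens on one import path (invented module path; go.mod format as in the companion vgo posts):

```
// go.mod for the hypothetical flagparse module.
module example.com/flagparse

// Tag v0.1.0, v0.2.0, ...: breaking changes allowed, same import path.
// Tag v1.0.0: the API settles; still the same import path.
// Only a post-v1 breaking change (v2.0.0) moves the import path,
// to example.com/flagparse/v2.
```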
I'm also not sure this makes it harder to ship a v2. Sure, users will have to change their import paths, although I'm sure tooling like GoLand can easily automate this. But this also frees library maintainers to do extensive API redesigns, without worrying about breaking everything or hanging their existing users out to dry. In particular, the ability to make v1 depend on (and become a wrapper for) v2 is quite nice. Not only does this pattern not break existing code, but it even allows users who have not yet migrated to the new API to benefit from the active development on the latest branch. And of course there is the potential for some degree of automated migration, through inlining wrapper functions as mentioned in the article.
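For instance, the forwarding pattern might look roughly like this (invented names again; the type alias is what lets values move between the two majors):

```go
// v1 of flagparse, maintained as a thin shim over v2 (a sketch; the
// forwarding idea itself is the article's).
package flagparse

import v2 "example.com/flagparse/v2"

// A type alias, not a new type, so values flow freely between code
// written against v1 and code written against v2.
type Flags = v2.Flags

// Parse forwards to the actively developed implementation, so even
// un-migrated v1 users pick up fixes made on the v2 branch.
func Parse(args []string) Flags { return v2.Parse(args) }
```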
> This doesn't define for whom it must be backwards compatible. Breaking changes are not all created equal. Semver is a pessimistic measure. You bump the major version if a change could break at least one user, in theory. In practice, most "breaking" changes do not break most users.
I think this means API breakage, which usually results in packages that won't even compile.
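For example (hypothetical signatures):

```go
// Suppose a v2 changed a signature instead of deleting a function:
//
//   v1: func Parse(args []string) Flags
//   v2: func Parse(args []string, strict bool) (Flags, error)
//
// Then every existing call site fails at build time — the break is
// loud and immediate, not a silent behavior change at runtime.
package main

import (
	"fmt"
	"os"

	"example.com/flagparse"
)

func main() {
	fmt.Println(flagparse.Parse(os.Args[1:])) // compiles against v1 only
}
```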