> Such manuscripts threaten to corrupt the scientific literature, misleading readers and potentially distorting systematic reviews.
Is treating "the scientific literature" as a single thing perhaps a habit worth giving up?
As convenient as it would be to be able to just blindly trust something because of where it is published, that model hasn't shown itself to be especially robust in other cases (e.g. the news media).
Elsewhere, this is a red flag:
> I trust it because of which aggregator aggregated it
Should we really make an exception for science? I think that academia is a bit biased towards optimism about publisher-based root-of-trust models because scientific publishing is a relatively unweaponized space. Sure, shenanigans happen, but not at the same scale as elsewhere. The fakers are just trying to get another published paper, they're for the most part not trying to mislead. It's only fake news with a lowercase-f.
Sure, let's try to create a medium we can trust, but let's not get our hopes too high about it. That energy is better spent augmenting the ability of a reader or researcher to decide whether to trust a paper based on its content, or based on it having been endorsed or authored by somebody that they explicitly (or transitively) trust.
I disagreed with you until the last paragraph. Lots of things authentically just rely on a high degree of trust and I suspect trying to engineer human systems to be zero trust will make them deeply pathological.
But tempering our expectations while working to meaningfully improve on conditions? Aces, all for it.
I agree that zero trust is in most cases a problematic goal. It's really root-of-trust vs web-of-trust that I'm on about here.
If peer review is the product then the trust should be peer to peer. It feels like we're treating the publishers themselves as an authority, which I dislike.
The publishers ostensibly occupy a role of stewardship; I suspect the model must have made sense at one point. I admit it's hard to see them as much more than rent extractors these days.
The nature of trust relationships seems to trend towards aggregation and centralization. Do you have any thoughts on how a web of trust can sustain itself, or is that perhaps not a concern if centralization appears to reflect a network consensus?
There's a belief among some distributed systems folk:
> If your system doesn't have an explicit hierarchy then it has an implicit one.
I think it's hogwash. There are plenty of distributed systems in nature that lack a hierarchy (mycorrhizal networks in the soil of a forest come to mind). Truly distributed systems are possible, we humans are just bad at it.
Or rather, we're bad at designing for it. We do it all the time in our personal lives, and we've been doing it for thousands of years, but when we introduce systems that are designed to scale globally, it falls apart and you end up with gatekeepers and rent-seeking.
Another distributed systems thing is the CAP theorem: a distributed system can guarantee at most two of consistency, availability, and partition tolerance.
Usually, the systems we design sacrifice partition tolerance (blockchains, for instance, go to great lengths to assure consistency).
But those fungal networks that I mentioned, they put partition tolerance first, which gives the system a sense of locality that is lacking when you instead focus on consistency.
That same sense of locality is found in natural emergent human social networks, they don't even try to achieve global consistency: if you think Jimbob is an asshat, and your friends agree, that's enough.
So I think the key to sustainable webs of trust lies somewhere in that underexplored design space where we make partition tolerance primary. Rather than building tech to tell people who to trust (think of that padlock icon in your browser) we should respect their autonomy a bit more and make the user experience be a function of that user's explicitly defined trust settings.
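To make "the user experience as a function of that user's explicitly defined trust settings" a bit more concrete, here's a minimal sketch of one way it could work. Everything here is an assumption on my part (the graph shape, the per-hop decay factor, the names): the user's explicit trust is propagated outward along trust edges, attenuating at each hop, and the resulting scores could then drive what the user sees first.

```python
# Hypothetical sketch: rank people/content by the reader's own trust graph,
# not by a central authority. The decay factor and graph shape are invented.
from collections import deque

def transitive_trust(my_trust: dict[str, float],
                     edges: dict[str, dict[str, float]],
                     decay: float = 0.5, max_depth: int = 3) -> dict[str, float]:
    """Propagate explicitly-assigned trust outward, attenuating per hop."""
    scores = dict(my_trust)
    frontier = deque((who, t, 0) for who, t in my_trust.items())
    while frontier:
        who, t, depth = frontier.popleft()
        if depth >= max_depth:
            continue
        for nxt, weight in edges.get(who, {}).items():
            derived = t * weight * decay
            # Keep the strongest trust path found so far.
            if derived > scores.get(nxt, 0.0):
                scores[nxt] = derived
                frontier.append((nxt, derived, depth + 1))
    return scores
```

For example, if I trust alice fully and alice trusts bob, bob ends up with half my trust in alice, carol with a fraction of that, and strangers with nothing, so there's no global score to game, only each reader's local view.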
One thing I like about this is that it removes the edgelord dynamic. There's no advantage to being the guy who posts the most outrageous stuff that just barely squeaks by the moderator. Instead, everybody can publish, but if you want to be heard as widely as possible you need to be trusted (in whatever domain you're publishing in) by people who are themselves well trusted in that domain.
Experts can be found not by listening to some authority that tells you who the experts are, but instead by following the directed graph of trust relationships until you find a cycle. That cycle is a community of experts in the "trust color" you're querying for. So expertise is more emergent and less top-down.
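A rough sketch of that cycle-finding idea, with the trust graph assumed (by me, not the post above) to be a simple dict of directed edges for one "trust color": a community of mutually-trusting experts is a strongly connected component of size greater than one, found here with Kosaraju's algorithm.

```python
# Hedged sketch: "expert communities" as strongly connected components
# (mutual-trust cycles) in a per-topic directed trust graph.
def expert_communities(edges: dict[str, set[str]]) -> list[set[str]]:
    """Kosaraju's algorithm; components of size > 1 are trust cycles."""
    nodes = set(edges) | {v for vs in edges.values() for v in vs}
    order: list[str] = []
    seen: set[str] = set()

    def dfs(start: str, graph: dict[str, set[str]], out: list[str]) -> None:
        # Iterative DFS that records nodes in postorder (finish order).
        stack = [(start, iter(graph.get(start, ())))]
        seen.add(start)
        while stack:
            node, it = stack[-1]
            for v in it:
                if v not in seen:
                    seen.add(v)
                    stack.append((v, iter(graph.get(v, ()))))
                    break
            else:
                stack.pop()
                out.append(node)

    for u in nodes:                      # pass 1: finish order on G
        if u not in seen:
            dfs(u, edges, order)

    rev: dict[str, set[str]] = {}        # pass 2: DFS on reversed G
    for u, vs in edges.items():
        for v in vs:
            rev.setdefault(v, set()).add(u)

    seen.clear()
    components: list[set[str]] = []
    for u in reversed(order):
        if u not in seen:
            comp: list[str] = []
            dfs(u, rev, comp)
            if len(comp) > 1:            # a lone node isn't a cycle
                components.append(set(comp))
    return components
```

Someone who trusts into the community but isn't trusted back (a newcomer, say) sits outside the cycle, which matches the "emergent rather than top-down" framing.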
If you can't agree with somebody about a topic, you can follow this graph and either find a mediator (someone you both transitively trust) or find separate experts who presumably exemplify the disagreement more energetically than you do.
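Mediator discovery could be as simple as intersecting the two parties' transitive-trust sets. This sketch assumes the same dict-of-directed-edges shape as above and is purely illustrative:

```python
# Illustrative sketch: a mediator is anyone both parties transitively trust.
def reachable(start: str, edges: dict[str, set[str]]) -> set[str]:
    """Everyone `start` trusts, directly or transitively (including themselves)."""
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in edges.get(u, ()):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def mediators(a: str, b: str, edges: dict[str, set[str]]) -> set[str]:
    """Candidates both a and b transitively trust, excluding the parties themselves."""
    return (reachable(a, edges) & reachable(b, edges)) - {a, b}
```

An empty result is informative too: it means the two parties share no transitive trust at all, which is exactly the case where you'd fall back to finding separate experts on each side.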
Instead of:
> Agree with us or be silent
It would be more of a:
> Here's how we can disagree as fruitfully as possible
Navigating the resulting dataset and deciding what to believe would be left as an exercise to the user, which it already is, but we'd hopefully have given them enough so that we can scale further than our unaugmented trust instincts allow for.
There's unfortunately not much money in building things like this. There's no guarantee that you stay in control of what you've built (the users could just revoke trust in you while still using the software that you wrote) and that tends to be a turn-off for investors.
I've given it a lot of thought, but not much code. I wish I could say that building a proof-of-concept has been difficult, but I'm not even to the difficult part yet, I'm struggling with the boring stuff like time management.
One day I'll have saved enough to take a year off and I'll build that POC.
---
Re CAP, the reason I think that consistency is the problem: it creates high-value targets for corruption. Somebody or something has to arbitrate against whatever alternative would threaten consistency. That's a position of power, and too often it's one that's easier to retain by abusing that power than by taking your role as arbiter seriously.
The power that comes from being an emergent authority on a topic--where the system arrives at consensus not because its design requires it, but because the thing we're agreeing on really has that much merit--that's a different kind of power. You can't squat on it and abuse it; people will just stop trusting you. The only thing to do with that power is to use it to continue striving towards something worthwhile (the difference being that people are now trying to help you). If this were our model for group coordination, I think we'd end up with leaders of a different temperament.
That's what we should all want for power. It should be hard to get and easy to lose.
One option is to provide a (perhaps less prestigious) avenue to publish non-novel or unsurprising findings. I suspect many people “fake” their results so all their effort isn’t in vain.