It’s cool that it shows all the papers rather than just some abstract metric or a yes-or-no answer.
It’s still only five topics, though, and you really just have to trust the devs that the info is accurate and unbiased.
They provide direct quotes from the papers that support their scoring, along with direct links to the full papers.
It’s super easy to just check their conclusions. I followed up on several papers, both yes and no, on the vax question, and there was no skullduggery: every paper I looked at was represented fairly in the scoring.
As in other scientific efforts, this is not just a ‘trust me, bro’ situation. They provide references.
That’s not quite what I meant. My point was that one has to trust them to provide suitable, representative coverage of all the papers released on the subject.
I see, thanks for clarifying.
I think that concern is partly addressed by their scoring. If a bad-faith actor assembled a distorted selection of papers that favored their conclusions but weren’t widely cited, those papers would show up as very small circles.
So it would be visually apparent that either they were being dishonest in gathering the research, or the question hasn’t yet been studied widely enough for this tool to be useful.
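To make that concrete, here’s a rough sketch of the sizing rule I have in mind. To be clear, the paper list, the citation counts, and the area-proportional scaling are all my own assumptions about how a tool like this might work, not anything I’ve confirmed about this project:

```python
# Hypothetical sketch: if circle AREA is proportional to citation count,
# a cherry-picked set of fringe papers renders visually tiny.
# All titles and citation numbers below are invented for illustration.
import math

papers = [
    {"title": "Large RCT, n=44k",      "stance": "yes", "citations": 3100},
    {"title": "National cohort study", "stance": "yes", "citations": 1250},
    {"title": "Small case series",     "stance": "no",  "citations": 12},
    {"title": "Preprint, n=30",        "stance": "no",  "citations": 4},
]

def circle_radius(citations: int, scale: float = 1.0) -> float:
    # Area-proportional sizing: area = scale^2 * citations, so the
    # radius grows with sqrt(citations) and well-cited papers dominate.
    return scale * math.sqrt(citations / math.pi)

for p in papers:
    r = circle_radius(p["citations"])
    print(f'{p["title"]:<24} {p["stance"]:>3}  radius={r:6.2f}')
```

Under that scaling, the two lightly cited “no” papers come out with radii an order of magnitude smaller than the big trials, which is exactly the visual tell I mean.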
The more I think about this, the more I love this project and its way of displaying the state of consensus on a question.
Something I’ve seen in some PubMed meta-analyses is that they publish the search terms and inclusion/exclusion criteria they used; maybe something along those lines?
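Roughly what I’m picturing is a machine-readable methodology record published alongside each chart, so coverage can be audited. Every field below is invented for illustration, not taken from the site or from any real meta-analysis:

```python
# Hypothetical PRISMA-style search record: the exact query plus the
# inclusion/exclusion criteria, published next to the visualization.
# All values are illustrative placeholders.
search_record = {
    "question": "vaccine efficacy against severe outcomes",
    "databases": ["PubMed", "Scopus"],
    "query": '("vaccine" AND "efficacy") AND ("randomized" OR "cohort")',
    "date_range": ("2020-01-01", "2023-06-30"),
    "inclusion_criteria": ["peer-reviewed", "human subjects", "n >= 100"],
    "exclusion_criteria": ["case reports", "animal studies"],
    "records_screened": 412,
    "records_included": 57,
}

for field, value in search_record.items():
    print(f"{field}: {value}")
```

With something like that attached, a reader could rerun the query themselves and check whether the circles on screen actually match what the search returns.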