• Carrolade@lemmy.world · ↑37 · 1 day ago

    Five maps so far. Is someone doing this by hand, the hard way? I figured it was an AI someone programmed, but if it’s an individual or small team, big respect. Very neat project.

    • obsoleteacct@lemmy.zip · ↑4 · 6 hours ago

      I hope they can grow this without compromising the quality or the vision. I’m sure they’ll have lots of people willing to get involved, but maybe not all for the most ethical reasons.

    • porksnort@slrpnk.net · ↑7 · 11 hours ago

      They provide links to their GitHub, which explains their whole methodology. This is a scientific effort, and it’s as transparent and well-documented as a project can be. The code is public, so you can understand the exact mechanics at play or just fork the project if you want to take the work in a different direction.

      It’s a great project and long overdue. I personally think scientific journals are incredibly outdated and haven’t been necessary for a couple of decades. Just put your work on a stable web site and cut out the parasites at the journals.

    • CatsPajamas@lemmy.dbzer0.com · ↑3 ↓4 · 22 hours ago

      AI would probably be pretty useful for this. You’d have to assume most of the “answers” are in the abstract, so you could just build one to scrape academic texts. Use RAG so it doesn’t hallucinate, maybe. Idk if that violates some T&C nonsense that doing it by hand doesn’t, though.
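
      Roughly, the pipeline could look like this toy sketch (the endpoint, JSON fields, and prompt are all made up for illustration, and the keyword-overlap retriever just stands in for a proper embedding index):

      ```python
      import requests  # assumes the papers are exposed via some JSON API

      FEED_URL = "https://example.org/api/papers"  # hypothetical endpoint

      def fetch_papers() -> list[dict]:
          """Pull paper metadata; assumes a JSON list of {title, abstract, url}."""
          resp = requests.get(FEED_URL, timeout=30)
          resp.raise_for_status()
          return resp.json()

      def retrieve(question: str, papers: list[dict], k: int = 3) -> list[dict]:
          """Naive keyword-overlap retrieval standing in for a real embedding index."""
          q_terms = set(question.lower().split())
          return sorted(
              papers,
              key=lambda p: len(q_terms & set(p["abstract"].lower().split())),
              reverse=True,
          )[:k]

      def build_prompt(question: str, hits: list[dict]) -> str:
          """Ground the model in the retrieved abstracts and demand citations,
          so every claim can be traced back to a specific paper."""
          context = "\n\n".join(f"[{p['url']}]\n{p['abstract']}" for p in hits)
          return (
              "Answer ONLY from the abstracts below, citing the URL for each claim. "
              "If the abstracts do not answer the question, say so.\n\n"
              f"{context}\n\nQuestion: {question}"
          )
      ```

      Even grounded like that, the output would still need spot-checking against the linked papers.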

      • entropicdrift@lemmy.sdf.org · ↑7 · 18 hours ago

        This is a bad idea. It’s extremely likely to hallucinate at some point no matter how many tools you equip it with, and human reviewers will eventually miss some fully made-up citation or completely misrepresented conclusion.

          • entropicdrift@lemmy.sdf.org · ↑1 · 5 hours ago

            I’m a professional software engineer and I’ve used RAG. It doesn’t prevent all hallucinations. Nothing can. The “hallucinations” are a fundamental consequence of the LLM architecture.

          • obsoleteacct@lemmy.zip · ↑2 · 6 hours ago

            Are the downvotes because people genuinely think this is an incorrect answer, or because they dislike anything remotely pro-AI?

              • entropicdrift@lemmy.sdf.org · ↑2 · 5 hours ago (edited)

                I use LLMs daily as a professional software engineer. I didn’t downvote you, and I’m not disengaging my thinking here. RAG doesn’t solve everything, and it’s better not to sacrifice scientific credibility on the altar of convenience.

                It’s always been easier to lie quickly than to dig for the truth. AIs aren’t consistent, regardless of the additional appendages you give them; they have no internal consistency by their very nature.

  • Korkki@lemmy.ml · ↑18 ↓1 · 1 day ago

    It’s cool that it shows all the papers and not just some abstract metric or a yes-or-no answer.

    It’s still only five topics, though, and you really just have to trust the devs that the info is accurate and unbiased.

    • porksnort@slrpnk.net · ↑10 · 1 day ago

      They provide direct quotes from the papers to support their scoring, along with direct links to the full papers.

      It’s super easy to just check their conclusions. I followed up on several papers, both yes and no, on the vax question. There was no skullduggery; every paper I looked at was represented fairly in the scoring.

      As in other scientific efforts, this is not just a ‘trust me, bro’ situation. They provide references.

      • Korkki@lemmy.ml · ↑2 · 11 hours ago

        Not what I really meant. My point was that you have to trust them to actually provide suitable, representative coverage of all the papers released on the subject.

        • porksnort@slrpnk.net · ↑1 · 3 hours ago

          I see, thanks for clarifying.

          I think that concern is partly covered by their scoring. If a bad-faith actor put together a skewed selection of papers that favored their conclusions but weren’t widely cited, those papers would show up as very small circles.

          So it would be visually apparent that either they were being dishonest in gathering the research, or the question hasn’t yet been studied widely enough for this tool to be useful.

          The more I think about this, the more I love this project and their way of displaying the state of consensus on a question.

        • UniversalBasicJustice@lemmy.dbzer0.com · ↑2 · 9 hours ago

          Something I’ve seen in some PubMed meta-analyses is that they publish the search terms and the inclusion/exclusion criteria they used; maybe something along those lines?

  • Jokulhlaups@lemmy.world · ↑10 · 1 day ago

    Please add a section about nature! Global warming, deforestation, and other human effects on nature.

    • porksnort@slrpnk.net · ↑6 · 1 day ago

      You can suggest new maps. They ask for links to papers, so if this is something you’re passionate about, gather some recent papers, especially review papers. Reviews seem to get more points in their scheme.

      I love this project too and have a personal passion for neurobiology studies related to the benefits of yoga. When I have a couple of hours, I will submit a map suggestion for that topic.

    • procesd@lemmy.world · ↑9 · 1 day ago

      From the docs on GitHub: “The size of the dots corresponds to the number of reviewed papers for literature reviews (non-reviews have the smallest size)…”
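
      Read literally, that rule might look something like this sketch (my own paraphrase, not the project’s actual code; the field names are assumptions):

      ```python
      def dot_size(paper: dict, base: float = 4.0, scale: float = 1.5) -> float:
          """Sketch of the documented sizing rule: literature reviews scale
          with the number of papers they review; non-reviews stay smallest."""
          if paper.get("is_review"):  # assumed field name
              return base + scale * paper.get("reviewed_paper_count", 0)
          return base  # smallest dot size for primary (non-review) studies
      ```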

    • PKscope@lemmy.world · ↑2 · 1 day ago

      I wondered the same. It doesn’t seem to correlate with P-Size, citations, or participants. Maybe some combined factor calculated from each? I’m really not sure.

    • 48954246@lemmy.world · ↑2 · 1 day ago

      Couldn’t quite work that out either. I initially thought it might have had to do with the number of citations, but that didn’t pan out.