• Carrolade@lemmy.world · ↑39 · 2 days ago

    Five maps so far. Is someone doing this by hand, the hard way? I figured it was an AI someone programmed, but if it’s an individual or small team, big respect. Very neat project.

    • obsoleteacct@lemmy.zip · ↑4 · 17 hours ago

      I hope they are able to grow this without compromising the quality or vision, because I’m sure they’ll have lots of people willing to get involved, but maybe not all for the most ethical reasons.

    • porksnort@slrpnk.net · ↑9 · 21 hours ago

      They provide links to their GitHub, which explains their whole methodology. This is a scientific effort, as transparent and well-documented as a project can be. They provide the code so you can understand the exact mechanics at play, or just fork the project if you want to take the work in a different direction.

      It’s a great project and long overdue. I personally think scientific journals are incredibly outdated and haven’t been necessary for a couple of decades. Just put your work on a stable web site and cut out the parasites at the journals.

    • CatsPajamas@lemmy.dbzer0.com · ↑3 ↓5 · 1 day ago

      AI would probably be pretty useful for this. You’d have to assume most of the “answers” are in the abstract, so you could just build one to scrape academic texts. Use RAG so it doesn’t hallucinate, maybe. Idk if that violates some T&C nonsense that doing it by hand doesn’t, though.
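      For what it’s worth, the retrieval half of that idea is just a nearest-document lookup. A toy sketch in Python (invented mini-corpus, naive bag-of-words similarity standing in for real embeddings; the LLM would then be prompted to answer only from the retrieved abstract):

```python
import re
from collections import Counter
from math import sqrt

# Invented stand-ins for scraped abstracts; a real pipeline would
# pull these from a publisher or aggregator API.
abstracts = {
    "doi:10.1/a": "Daily exercise reduced blood pressure in adults.",
    "doi:10.1/b": "We found no effect of the supplement on sleep quality.",
}

def vectorize(text):
    # Bag-of-words term frequencies; real RAG systems use embeddings.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question, corpus):
    # Return the key of the abstract most similar to the question; the
    # LLM is then constrained to answer only from this retrieved text.
    q = vectorize(question)
    return max(corpus, key=lambda doi: cosine(q, vectorize(corpus[doi])))

best = retrieve("Does exercise lower blood pressure?", abstracts)
```

      Grounding the answer in retrieved text reduces hallucination, but doesn’t eliminate it.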

      • entropicdrift@lemmy.sdf.org · ↑8 · 1 day ago

        This is a bad idea. It’s extremely likely to hallucinate at one point or another no matter how many tools you equip it with, and humans will eventually miss some fully made up citation or completely misrepresented conclusion.

          • entropicdrift@lemmy.sdf.org · ↑4 · 16 hours ago

            I’m a professional software engineer and I’ve used RAG. It doesn’t prevent all hallucinations. Nothing can. The “hallucinations” are a fundamental part of the LLM architecture.

          • obsoleteacct@lemmy.zip · ↑3 · 17 hours ago

            Are the down votes because people genuinely think this is an incorrect answer, or because they dislike anything remotely pro-AI?

              • entropicdrift@lemmy.sdf.org · ↑5 · edited · 16 hours ago

                I use LLMs daily as a professional software engineer. I didn’t downvote you, and I’m not disengaging my thinking here. RAG doesn’t solve everything, and it’s better not to sacrifice scientific credibility on the altar of convenience.

                It’s always been easier to lie quickly than to dig for the truth. AIs are not consistent, regardless of the additional appendages you give them. They have no internal consistency by their very nature.

                • porksnort@slrpnk.net · ↑3 · 8 hours ago

                  And this isn’t even really a great application for RAG. Papermaps just goes off of references and citations. Perhaps a sentiment analysis would be marginally useful, but since you need a human to verify all LLM outputs it would be a dubious time savings.

                  The system scores review papers very favorably, and the “yes/no/maybe” conclusion is right in the abstract, usually in its last sentence or two. This is not a prime candidate for any LLM; it’s simple database operations on structured data that already exists. There’s no use case here.
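                  To the “simple operations” point: if the verdict really sits in the abstract’s final sentence, extracting it is plain string handling, no model required. A toy illustration in Python (the keyword rule is invented here, not Papermaps’ actual method):

```python
import re

def classify_abstract(abstract):
    # Invented rule of thumb, not Papermaps' real code: look for a
    # verdict keyword in the final sentence of the abstract.
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", abstract.strip()) if s]
    if not sentences:
        return "unclassified"
    last = sentences[-1].lower()
    # Check "maybe" and "no" before "yes" so a hedged or negative
    # conclusion isn't misread as a positive one.
    for verdict in ("maybe", "no", "yes"):
        if re.search(rf"\b{verdict}\b", last):
            return verdict
    return "unclassified"
```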

                  • entropicdrift@lemmy.sdf.org · ↑3 · edited · 7 hours ago

                    > Perhaps a sentiment analysis would be marginally useful, but since you need a human to verify all LLM outputs it would be a dubious time savings.

                    Thank you, yes. That’s exactly my point. You’d need a human to verify all of the outputs anyway, and these are literally machines that exclusively make text that humans find believable, so you’re likely adding to the problem of humans messing stuff up more than speeding anything up. Being wrong fast has always been easy, so it’s no help here.