Researchers created 30 fake student accounts to submit model-generated responses to real exams. Professors graded the 200–1,500-word responses from the AI undergrads and gave them better marks than real students 84% of the time. Only 6% of the bot respondents got caught, though… for being too good. Meanwhile, AI detection tools? Total bunk.

Will AI be the new calculator… or the death of us all (obviously the only alternatives)?

Note: the software was NOT as good on the advanced exams, even though it handled the easier material.

  • z00s@lemmy.world · 5 months ago · (+20/−3)

    All this moral panic is garbage.

    Easily solved by using essays on an unseen question, written under exam conditions, as the assessment instrument.

    Literally a pencil and paper solves this problem.

    • AwesomeLowlander@lemmy.dbzer0.com · 5 months ago · (+12/−4)

      A lot of students don’t perform well under exam conditions due to stress and pressure. Also, unless you’re eliminating coursework entirely, it doesn’t remove the issue there.

      • z00s@lemmy.world · 5 months ago · (+4/−6)

        No assessment method is perfectly suited to every student.

        Coursework can be similarly adapted.

          • z00s@lemmy.world · 5 months ago · (+1/−7)

            It’s not my job to educate you on how the education industry works. Go and read what qualified people have already written about it in academic journals.