Guardian investigation finds almost 7,000 proven cases of cheating – and experts say these are the tip of the iceberg

Thousands of university students in the UK have been caught misusing ChatGPT and other artificial intelligence tools in recent years, while traditional forms of plagiarism show a marked decline, a Guardian investigation can reveal.

A survey of academic integrity violations found almost 7,000 proven cases of cheating using AI tools in 2023-24, equivalent to 5.1 for every 1,000 students. That was up from 1.6 cases per 1,000 in 2022-23.

Figures up to May suggest that number will increase again this year to about 7.5 proven cases per 1,000 students – but recorded cases represent only the tip of the iceberg, according to experts.

The data highlights a rapidly evolving challenge for universities: trying to adapt assessment methods to the advent of technologies such as ChatGPT and other AI-powered writing tools.

  • Cosmonauticus@lemmy.world · 1 day ago

    It’s almost like we shouldn’t place so much value on just passing the exam or writing the paper, and should instead revamp our entire approach to teaching

      • CanadaPlus@lemmy.sdf.org · edited · 14 hours ago

        Yup. Although “avoid relying on writing and especially subjective long-form writing” is much more practicable.

    • aceshigh@lemmy.world · 1 day ago

      Ditto. A license just means you can pass a test; it doesn’t say anything more than that. That’s why you’re always advised to get second opinions.

    • Deestan@lemmy.world · edited · 23 hours ago

      This is what I have been hoping LLMs would provoke since the beginning.

      Testing understanding by asking students to parrot textbooks with the wording changed was always a shitty method, and one that disincentivizes deep learning:

      It allows teachers who do not understand their field beyond a superficial level to teach, and to evaluate. What happens when a student, given the test question “Give an intuitive description of an orbit in your own words”, answers by describing orbital mechanics in a relative frame instead of a global frame, when the textbook only mentions the global frame? They demonstrate understanding beyond the material, which is excellent, but all they do is risk being marked down by a teacher who can’t see the connection.

      A student who only memorized the words, and can rearrange them a bit, gets full marks with no risk.