Linus Torvalds has come out strongly against proposed support for big-endian RISC-V in the Linux kernel.

In response to a mailing list question about whether the RISC-V big-endian ("BE") patches being worked on could make it into the current Linux kernel cycle, Linus Torvalds initially wrote:

"Oh Christ. Is somebody seriously working on BE support in 2025?

WHY?

Seriously, that sounds like just stupid. Is there some actual real reason for this, or is it more of the “RISC-V is used in academic design classes and so people just want to do endianness for academic reasons”?

Because I’d be more than happy to just draw a line in the sand and say “New endianness problems are somebody ELSES problem”, and tell people to stop being silly.

Let’s not complicate things for no good reason. And there is NO reason to add new endianness.

RISC-V is enough of a mess with the millions of silly configuration issues already. Don’t make it even worse.

Tell people to just talk to their therapists instead. That’s much more productive."

  • FizzyOrange@programming.dev · 20 hours ago

    He’s right. I think it was really a mistake for RISC-V to support it at all, and any RISC-V CPU that implements it is badly designed.

    This is the kind of silly stuff that just makes RISC-V look bad.

    Couldn’t agree more. RISC-V even allows configurable endianness (bi-endian). You can have Machine mode little endian, supervisor mode big endian, and user mode little endian, and you can change that at any time. Software can flip its endianness on the fly. And don’t forget that instruction fetch ignores this and is always little endian.
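    (For anyone unsure what "endianness" actually changes: it's only the byte order of multi-byte values in memory. A minimal C probe, using nothing beyond the standard library, shows which ordering the host uses:)

    ```c
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* Store a known 32-bit pattern and inspect its lowest-addressed byte. */
        uint32_t probe = 0x01020304;
        uint8_t first = *(uint8_t *)&probe;

        if (first == 0x04)
            puts("little-endian");   /* least significant byte stored first */
        else if (first == 0x01)
            puts("big-endian");      /* most significant byte stored first */
        return 0;
    }
    ```

    On a bi-endian RISC-V core this probe could report different answers in different privilege modes, which is exactly the complexity being objected to.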

    Btw the ISA manual did originally have a justification for having big endian, but it seems to have been removed:

    We originally chose little-endian byte ordering for the RISC-V memory system because little-endian systems are currently dominant commercially (all x86 systems; iOS, Android, and Windows for ARM). A minor point is that we have also found little-endian memory systems to be more natural for hardware designers. However, certain application areas, such as IP networking, operate on big-endian data structures, and certain legacy code bases have been built assuming big-endian processors, so we have defined big-endian and bi-endian variants of RISC-V.

    This is a really bad justification. The cost of defining an optional big/bi-endian mode is not zero, even if nobody ever implements it (as far as I know nobody has). It’s extra work in the specification (how does each new feature interact with big endian?), in verification (does your model support big endian?), and so on.
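    The IP-networking argument is especially weak, because portable code has always read big-endian wire formats on little-endian CPUs with plain shifts; no big-endian hardware mode is needed. A sketch (the packet bytes are a made-up example):

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Load a 32-bit big-endian (network-order) value from a byte buffer.
     * Assembling the word byte by byte gives the same result regardless
     * of the host's endianness. */
    static uint32_t load_be32(const uint8_t *p) {
        return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
               ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
    }

    int main(void) {
        /* Hypothetical first four bytes of an IPv4 header. */
        uint8_t pkt[4] = {0x45, 0x00, 0x00, 0x54};
        printf("0x%08x\n", load_be32(pkt));
        return 0;
    }
    ```

    Compilers recognize this pattern and emit a single load plus byte-swap instruction where one exists, so there isn’t even a performance argument for a big-endian mode.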

    Linux should absolutely not implement this.

    • Frezik@lemmy.blahaj.zone · 19 hours ago

      I guess that could be useful if you’re designing a router OS? Is that even going to be a significant benefit there?

      • FizzyOrange@programming.dev · 18 hours ago

        Unlikely. You’d do packet processing in hardware, either through some kind of peripheral or, since you’re using RISC-V, via custom instructions.