Linus Torvalds has come out strongly against proposed support for RISC-V big endian capabilities within the Linux kernel.
In response to a mailing list comment asking whether the RISC-V big endian (“BE”) patches being worked on would be able to make it into the current Linux kernel cycle, Linus Torvalds initially wrote:
"Oh Christ. Is somebody seriously working on BE support in 2025?
WHY?
Seriously, that sounds like just stupid. Is there some actual real reason for this, or is it more of the “RISC-V is used in academic design classes and so people just want to do endianness for academic reasons”?
Because I’d be more than happy to just draw a line in the sand and say “New endianness problems are somebody ELSES problem”, and tell people to stop being silly.
Let’s not complicate things for no good reason. And there is NO reason to add new endianness.
RISC-V is enough of a mess with the millions of silly configuration issues already. Don’t make it even worse.
Tell people to just talk to their therapists instead. That’s much more productive."
He’s right. I think it was really a mistake for RISC-V to support it at all, and any RISC-V CPU that implements it is badly designed.
Couldn’t agree more. RISC-V even allows configurable endianness (bi-endian). You can have machine mode little endian, supervisor mode big endian, and user mode little endian, and you can change that at any time. Software can flip its endianness on the fly. And don’t forget that instruction fetch ignores this and is always little endian.
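For concreteness, here is a minimal sketch of what that looks like at the hardware level, assuming the RV64 bit layout from the privileged spec (MBE at mstatus bit 37, SBE at bit 36, UBE at bit 6); the helper names are made up for illustration and this would only run in machine mode:

```c
#include <stdint.h>

#define MSTATUS_UBE  (1UL << 6)   /* user-mode data accesses big endian       */
#define MSTATUS_SBE  (1UL << 36)  /* supervisor-mode data accesses big endian */
#define MSTATUS_MBE  (1UL << 37)  /* machine-mode data accesses big endian    */

/* Flip supervisor-mode data accesses to big endian at runtime.  Instruction
 * fetch stays little endian no matter what these bits say. */
static inline void smode_big_endian_on(void)
{
    asm volatile("csrs mstatus, %0" : : "r"(MSTATUS_SBE) : "memory");
}

static inline void smode_big_endian_off(void)
{
    asm volatile("csrc mstatus, %0" : : "r"(MSTATUS_SBE) : "memory");
}
```

A csrc on the same bit flips it straight back, which is exactly the change-it-at-any-time behaviour described above.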
Btw, the ISA manual did originally have a justification for big endian support, but it seems to have been removed:
This is a really bad justification. The cost of defining an optional big/bi-endian mode is not zero, even if nobody ever implements it (and as far as I know, nobody has). It’s extra work in the specification (how does this feature interact with big endian?), in verification (does your model support big endian?), and so on.
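To give a sense of what that extra work looks like in practice, here is a purely illustrative C sketch (not taken from any real model) of the branch that bi-endian support forces into every data access of an emulator or formal model:

```c
#include <stdint.h>
#include <stdbool.h>

/* Every data load now has to consult the effective *BE bit for the current
 * privilege mode before the bytes can be assembled into a value. */
static uint32_t assemble32(const uint8_t b[4], bool big_endian)
{
    if (big_endian)
        return (uint32_t)b[0] << 24 | (uint32_t)b[1] << 16 |
               (uint32_t)b[2] << 8  | (uint32_t)b[3];
    return (uint32_t)b[3] << 24 | (uint32_t)b[2] << 16 |
           (uint32_t)b[1] << 8  | (uint32_t)b[0];
}
```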
Linux should absolutely not implement this.
I guess that could be useful if you’re designing a router OS? Is that even going to be a significant benefit there?
Unlikely; you’d do packet processing in hardware, either through some kind of peripheral or, if you’re using RISC-V, by adding custom instructions.
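And even in software, here is a rough illustration (the header struct is simplified and hypothetical) of what big-endian wire formats actually cost on a little-endian core:

```c
#include <stdint.h>
#include <arpa/inet.h>  /* ntohs() */

/* Made-up slice of an IPv4 header: multi-byte fields arrive in network
 * byte order (big endian) regardless of the CPU's endianness. */
struct ipv4_hdr_prefix {
    uint8_t  ver_ihl;
    uint8_t  tos;
    uint16_t total_len;   /* big endian on the wire */
};

/* One byte swap per field; with the Zbb extension this typically compiles
 * to a rev8 byte-reverse (plus a shift for sub-XLEN widths). */
static uint16_t packet_length(const struct ipv4_hdr_prefix *h)
{
    return ntohs(h->total_len);
}
```

So the benefit of a big-endian data mode for router-style workloads would likely be marginal, even before considering hardware offload.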