• girlfreddy@lemmy.ca · 3 months ago

    A small blurb from The Guardian on why Andres Freund went looking in the first place.

    So how was it spotted? A single Microsoft developer was annoyed that a system was running slowly. That’s it. The developer, Andres Freund, was trying to uncover why a system running a beta version of Debian, a Linux distribution, was lagging when making encrypted connections. That lag was all of half a second, for logins. That’s it: before, it took Freund 0.3s to login, and after, it took 0.8s. That annoyance was enough to cause him to break out the metaphorical spanner and pull his system apart to find the cause of the problem.

  • d3Xt3r@lemmy.nzM · 3 months ago

    This is informative, but unfortunately it doesn’t explain how the actual payload works - how does it compromise SSH exactly?

    • uis@lemm.ee · 3 months ago

      There is Red Hat’s patch for OpenSSH that adds systemd notification support, which pulls in libsystemd as a dependency, which in turn depends on liblzma.
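
      As a minimal sketch of what that notification looks like (assuming libsystemd’s sd_notify() API; the actual distro patch is more involved):

      ```c
      // Minimal sketch of the readiness notification such patches add to
      // sshd. Linking against libsystemd for this one call is what pulls
      // liblzma (a libsystemd dependency) into sshd's address space.
      #include <systemd/sd-daemon.h>

      int main(void) {
          sd_notify(0, "READY=1");  /* tell systemd the daemon finished starting */
          return 0;
      }
      ```

      Built with `gcc demo.c $(pkg-config --libs libsystemd)`; the point is that one convenience call drags a compression library into your SSH daemon.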

    • Aatube@kbin.melroy.org · 3 months ago

      It allows a patched SSH client to bypass SSH authentication and gain access to a compromised computer.

      • d3Xt3r@lemmy.nzM · 3 months ago

        From what I’ve heard so far, it’s NOT an authentication bypass, but a gated remote code execution.

        There’s some discussion on that here: https://bsky.app/profile/filippo.abyssdomain.expert/post/3kowjkx2njy2b

        But it would be nice to have a diagram like OP’s to understand exactly how it does the RCE and implements the SSH backdoor. If we understand how, maybe we can take measures to prevent similar exploits in the future.

        • Aatube@kbin.melroy.org · 3 months ago

          Under the right circumstances this interference could potentially enable a malicious actor to break sshd authentication and gain unauthorized access to the entire system remotely. —Wikipedia, sourced to RedHat

          Of course, the authentication bypass allows remote code execution.

        • underisk@lemmy.ml · 3 months ago

          I think ideas about prevention should be more concerned with the social-engineering aspect of this attack. The code itself is certainly cleverly hidden, but any bad actor who gains the kind of access Jia did could likely pull off something similar without duplicating their specific method or technique.

          • whereisk@lemmy.world · 3 months ago

            Ideally you’d need a double-blind checking mechanism that is, by definition, impervious to social engineering.

            That may be possible in larger projects, but I doubt you can do much in projects that have very few maintainers.

            I bet the lesson here for future attackers is: do not affect start-up time.

            • underisk@lemmy.ml · 3 months ago

              I imagine that if this attacker hadn’t been in a rush to get the backdoor into the upcoming Debian and Fedora stable releases, he would have noticed and corrected the increased-CPU-usage tell and remained undetected.

        • baseless_discourse@mander.xyz · 3 months ago

          I am not a security expert, but the scenario they describe sounds exactly like an authentication bypass to me.

          According to https://www.youtube.com/watch?v=jqjtNDtbDNI the software installs a malicious library that overwrites the signature-verification function of SSH.

          I was wondering: if the bypass function had been designed to be slightly less resource-intensive, it probably wouldn’t have been discovered and would have shipped to production.

          Also, I have mixed feelings about dynamic linking. On the one hand, it allows projects like hardened_malloc to integrate easily into the system; on the other hand, it also enables an attacker to hijack a library in a similar fashion.
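
          For context, the payload reportedly used GNU IFUNC resolvers to swap in its own functions at load time. A toy example of the mechanism (not the actual payload):

          ```c
          // Toy GNU IFUNC demo (GCC on Linux/ELF): the dynamic linker runs the
          // resolver during relocation, before main(), and binds the symbol to
          // whichever implementation the resolver returns.
          #include <stdio.h>

          static int add_real(int a, int b) { return a + b; }

          __attribute__((unused))
          static int add_evil(int a, int b) { return a + b + 1; }  /* stand-in "payload" */

          static int (*resolve_add(void))(int, int) {
              /* a malicious resolver could probe its surroundings here and
                 return add_evil only under the right conditions */
              return add_real;
          }

          int add(int a, int b) __attribute__((ifunc("resolve_add")));

          int main(void) {
              printf("%d\n", add(2, 2));  /* which add() ran was decided at load time */
              return 0;
          }
          ```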

          • Aatube@kbin.melroy.org · 3 months ago

            5.6.1 in fact made it less resource-intensive, but the distro happened not to have updated yet when Freund discovered the backdoor.

          • Cochise@lemmy.eco.br · 3 months ago

            An authentication bypass should give you interactive access, “I’m in”-style. Remote code execution only lets you run a command, without persistent access. You can use some RCE vulnerabilities to bypass authentication, but not all.

            • baseless_discourse@mander.xyz · 3 months ago

              Yeah, but the malicious code replaces SSH’s signature-verification function with one that accepts a specific attacker-controlled signature. Hence an attacker holding the matching key can SSH into any compromised system without proper authentication.

              That kind of describes an authentication bypass, not just remote code execution…
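
              A hypothetical sketch of such a hooked verifier (names and the marker check are made up; the real payload reportedly verifies an Ed448 signature against the attacker’s hard-coded key):

              ```c
              /* Hypothetical hooked verifier -- NOT the real payload;
                 real_verify() and MAGIC are stand-ins for illustration. */
              #include <stddef.h>
              #include <string.h>

              static const unsigned char MAGIC[] = "attacker-marker";  /* made up */

              static int real_verify(const unsigned char *sig, size_t len) {
                  (void)sig; (void)len;
                  return 0;  /* stand-in for the genuine cryptographic check */
              }

              int hooked_verify(const unsigned char *sig, size_t len) {
                  /* if the "signature" carries the attacker's marker, accept it */
                  if (len >= sizeof MAGIC && memcmp(sig, MAGIC, sizeof MAGIC) == 0)
                      return 1;  /* report "valid" without real verification */
                  return real_verify(sig, len);  /* everyone else: normal path */
              }

              int main(void) {
                  unsigned char fake[32] = "attacker-marker";
                  return hooked_verify(fake, sizeof fake) ? 0 : 1;  /* 0 = accepted */
              }
              ```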

      • The Doctor@beehaw.org · 3 months ago

        Just because somebody picked a vaguely Chinese-sounding handle doesn’t tell you much about who or where they really are.

      • Potatos_are_not_friends@lemmy.world · 3 months ago

        Can’t confirm but unlikely.

        Via https://boehs.org/node/everything-i-know-about-the-xz-backdoor

        They found this particularly interesting as Cheong is new information. I’ve now learned from another source that Cheong isn’t Mandarin, it’s Cantonese. This source theorizes that Cheong is a variant of the 張 surname, as “eong” matches Jyutping (a Cantonese romanisation standard) and “Cheung” is pretty common in Hong Kong as an official surname romanisation. A third source has alerted me that “Jia” is Mandarin (as Cantonese rarely uses J and especially not Ji). The Tan last name is possible in Mandarin, but is most common for the Hokkien Chinese dialect pronunciation of the character 陳 (Cantonese: Chan, Mandarin: Chen). It’s most likely our actor simply mashed plausible sounding Chinese names together.

          • ForgotAboutDre@lemmy.world · 3 months ago

            Could be China creating reasonable doubt. Making this sort of mistake makes explanations that it wasn’t China sound plausible. Even if evidence other than the name comes out, this rebuttal can be repeated to create confusion among the public, reasonable suspicion of the accusers, and a plausible excuse for other states not to blame China (even if they believe it was China).

            Confusion and multiple narratives are a technique often employed by the Soviet, Russian, and Chinese governments. We are unlikely to be able to answer the question ourselves; it will be up to the intelligence agencies to do that.

            If someone wanted to blame China for this, they would take the name of a real Chinese person to do it. There are over a billion real people they could take a name from. It’s unlikely that someone creating a name for this type of espionage would accidentally pick an implausible one.

            • fluxion@lemmy.world · 3 months ago

              I’m not suggesting one way or another, only that the quoted explanation taken at face value isn’t suggesting China based on name analysis.

              There’s also no reason to assume a nation state. This is completely within the realm of a single hacker or a small group, and organized crime is another possibility. Errors with naming are plausible, just like the initial slip-ups with the timing delay and the Valgrind errors.

              Even assuming a nation state, you name Russia as a possibility. Russia has shown itself to be completely capable of errors: in its hacks (the 2016 election interference that was traced back to their intelligence base), its wars, its assassination attempts, etc.

              And to me it doesn’t seem any more likely that China would point to themselves but sprinkle doubt with inconsistent naming versus just outright pointing to someone else.

              It’s all guesses, nothing points one way or another. I think we agree on that.

              • ForgotAboutDre@lemmy.world · 3 months ago

                A big part of it is also letting other people know you did it. China and Russia are big on this. They create dangerous situations, then say they aren’t responsible, all while sowing confusion. They want plausible deniability, confusion, and credit for doing it.

  • EmperorHenry@discuss.tchncs.de · 3 months ago

    At least Microsoft is honest enough to admit their software needs protection, unlike Apple and unlike most of the people who have made Linux distros. (edit: Microsoft is still dishonest about what kind of protection it needs, though)

    Even though Apple lost a class-action lawsuit for false advertising over the claim that “Macs can’t get viruses,” they still heavily imply that a Mac doesn’t need an antivirus.

    Any OS can get infected; it’s just a matter of writing the code and finding a way to deliver it to the system… Now you might be thinking, “I’m very careful about what I click on.” That’s a good practice to have, but most malware gets delivered through means that don’t require the user to click on anything.

    You need an antivirus on every computer you have: Linux, Android, Mac, Windows, iOS, all of them. There are loads of videos on YouTube showing how well (or not so well) different antivirus programs work on Windows and Android.

    • Possibly linux@lemmy.zipOP · 3 months ago

      An “antivirus” tends to be a proprietary black box. Such “antivirus” programs could not have detected the XZ backdoor.

          • EmperorHenry@discuss.tchncs.de · 3 months ago

            Prevention and detection

            Most of the time detection also means prevention, but with a whitelisting antivirus, prevention often means the threat isn’t detected; it’s just prevented from running.

            A whitelisting application has a list of what it knows is bad AND what it knows in advance to be good.

            Anything it can’t identify on the spot is treated as unknown and not allowed to run: not deleted, not quarantined, just blocked from running until the user can upload it to services like VirusTotal to figure out whether it’s safe.

            Upload it to VirusTotal; if it wasn’t already known, re-scan a few hours later to see whether it’s been flagged as malicious. If it was already known, re-scan to see whether anything has since figured out that it’s malicious.

            Which is why I think it’s borderline criminal that most antivirus programs don’t work that way.
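
            A toy sketch of that default-deny logic (the hash strings are made up; a real implementation would hash the executable and query a reputation database):

            ```c
            /* Toy default-deny (whitelist) policy: only known-good hashes run;
               unknowns are blocked -- not deleted, not quarantined. */
            #include <stdio.h>
            #include <string.h>

            typedef enum { GOOD, BAD, UNKNOWN } verdict;

            static const char *known_good[] = { "hash-aaa", "hash-bbb" };  /* made up */
            static const char *known_bad[]  = { "hash-ccc" };              /* made up */

            static verdict classify(const char *hash) {
                for (size_t i = 0; i < sizeof known_good / sizeof *known_good; i++)
                    if (strcmp(hash, known_good[i]) == 0) return GOOD;
                for (size_t i = 0; i < sizeof known_bad / sizeof *known_bad; i++)
                    if (strcmp(hash, known_bad[i]) == 0) return BAD;
                return UNKNOWN;  /* send off for analysis, re-check later */
            }

            int main(void) {
                printf("known-good runs: %s\n", classify("hash-aaa") == GOOD ? "yes" : "no");
                printf("unknown runs:    %s\n", classify("hash-zzz") == GOOD ? "yes" : "no");
                return 0;
            }
            ```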

            • Portable4775@lemmy.zip · 3 months ago

              A whitelisting application has a list of what it knows is bad AND what it knows in advance to be good.

              How would it know this? Is this defined by a person/people? If so, that wouldn’t have mattered. liblzma was known in advance to be good, then the malicious update was added, and people still presumed that it was good.

              This wasn’t a case of some random package/program wreaking havoc. It was trusted malicious code.

              Also, you’re asking for an antivirus that uploads and uses a sandbox to analyze ALL packages. Good luck with that. (AVs would probably have a hard time detecting malicious build actions, anyways).

              • EmperorHenry@discuss.tchncs.de · 3 months ago

                Also, you’re asking for an antivirus that uploads and uses a sandbox to analyze ALL packages. Good luck with that. (AVs would probably have a hard time detecting malicious build actions, anyways).

                Three different antivirus programs already do that. Comodo, for example, has a built-in sandbox for exactly this.

                • Portable4775@lemmy.zip · 3 months ago

                  It places unknown/new software in a sandbox. You want an AV that tests all pre-existing packages in a sandbox.

  • UnityDevice@startrek.website · 3 months ago

    If this was done by multiple people, I’m sure the person that designed this delivery mechanism is really annoyed with the person that made the sloppy payload, since that made it all get detected right away.

    • bobburger@fedia.io · 3 months ago

      I like to imagine this was thought up by some ambitious product manager who enthusiastically pitched this idea during their first week on the job.

      Then they carefully and meticulously implemented their plan over 3 years, always promising the executives a huge payoff. Then the product manager saw the writing on the wall that the project was going to fail, so they bailed while they could and got a better position at a different company.

      The new product manager overseeing this project didn’t care about it at all. New PM said fuck it and shipped the exploit before it was ready so the team could focus their work on a new project that would make new PM look good.

      The new project will be ready in just 6-12 months, and it is totally going to disrupt the industry!

      • nxdefiant@startrek.website · 3 months ago

        I see a dark room of shady, hoodie-wearing, code-projected-on-their-faces, typing-on-two-keyboards-at-once, ’90s-movie-style hackers. The tables are littered with empty energy drink cans and empty pill bottles.

        A man walks in. Smoking a thin cigarette, covered in tattoos and dressed in the flashiest interpretation of “Yakuza Gangster” imaginable, he grunts with disgust and mutters something in Japanese as he throws the cigarette to the floor, grinding it into the carpet with his thousand dollar shoes.

        Flipping on the lights with an angry flourish, he yells at the room to gather for standup.

  • JoeKrogan@lemmy.world · 3 months ago

    I think going forward we need to look at packages with a single or few maintainers as target candidates. Especially if they are as widespread as this one was.

    In addition, I think security needs to be a higher priority too: no more patching fuzzers to allow that one program to compile. Fix the program.

    I’d also love to see systems hardened by default.

    • Amju Wolf@pawb.social · 3 months ago

      Packages or dependencies with only one maintainer that are this popular have always been an issue, and not just a security one.

      What happens when that person can’t afford to or doesn’t want to run the project anymore? What if they become malicious? What if they sell out? Etc.

    • Potatos_are_not_friends@lemmy.world · 3 months ago

      In the words of the devs in that security email, and I’m paraphrasing -

      “Lots of people giving next steps, not a lot people lending a hand.”

      I say this as a person not lending a hand. This stuff is over my head and outside my industry knowledge and experience, even after I spent the whole weekend piecing everything together.

      • JoeKrogan@lemmy.world · 3 months ago

        You are right; as you note, this requires a set of skills that many don’t possess.

        I have been looking for ways I can help going forward too, where time permits. I was just thinking that a list of possible targets would be helpful, as we could crowdsource the effort on GitLab or something.

        I know the folks on the mailing lists are up to their necks going through this, and they will communicate to us in good time once the investigations have concluded.

    • suy@programming.dev · 3 months ago

      no more patching fuzzers to allow that one program to compile. Fix the program

      Agreed.

      Remember Debian’s OpenSSL fiasco? The one that affected all the other derivatives as well, including Ubuntu.

      It all started because OpenSSL added a bunch of uninitialized memory, plus the PID, to the entropy pool. Who the hell relies on uninitialized memory, ever? The Debian maintainer wanted to fix the Valgrind errors and submitted a patch. It wasn’t properly reviewed or accepted into OpenSSL, so the maintainer added it to the Debian package’s patches, and everything after that is history.

      Everyone blamed Debian “because it only happened there,” and mistakes were definitely made on that side, but I put far more blame on the OpenSSL developers.
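
      For readers who don’t remember the details, the pattern looked roughly like this (a simplified sketch, not the actual OpenSSL code):

      ```c
      /* Simplified sketch of the pattern, NOT the actual OpenSSL code:
         seeding the entropy pool from a buffer that was never written. */
      #include <stddef.h>
      #include <unistd.h>

      static void entropy_mix(const void *buf, size_t n) {
          (void)buf; (void)n;  /* stand-in for the real pool-mixing routine */
      }

      void seed_pool(void) {
          unsigned char buf[256];        /* deliberately left uninitialized */
          entropy_mix(buf, sizeof buf);  /* Valgrind: "use of uninitialised value" */

          pid_t pid = getpid();
          entropy_mix(&pid, sizeof pid); /* plus the process ID */
      }

      int main(void) { seed_pool(); return 0; }
      ```

      The Debian patch commented out the offending mix call to silence Valgrind, reportedly leaving the PID as essentially the only seed, which is why the keyspace collapsed.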

      • dan@upvote.au · 3 months ago

        OpenSSL added a bunch of uninitialized memory, plus the PID, to the entropy pool.

        Did they have a comment above the code explaining why it was doing it that way? If not, I’d blame OpenSSL for it.

        The OpenSSL codebase has a bunch of issues, which is why somewhat-API-compatible forks like LibreSSL and BoringSSL exist.

    • Socsa@sh.itjust.works · 3 months ago

      This has always been the case. Maybe I work in a unique field but we spend a lot of time duplicating functionality from open source and not linking to it directly for specifically this reason, at least in some cases. It’s a good compromise between rolling your own software and doing a formal security audit. Plus you develop institutional knowledge for that area.

      And yes, we always contribute code back where we can.

      • datelmd5sum@lemmy.world · 3 months ago

        We run our forks not because of security, but because pretty much nothing seems to work for production use without some source code level mods.

  • ∟⊔⊤∦∣≶@lemmy.nz · 3 months ago

    I have heard multiple times from different sources that building from git source instead of using tarballs invalidates this exploit, but I do not understand how. Is anyone able to explain that?

    If malicious code is in the source, and therefore in the tarball, what’s the difference?

    • Aatube@kbin.melroy.org · 3 months ago

      Because m4/build-to-host.m4, the entry point, is not in the git repo; it was added by the malicious maintainer only to the tarballs.

        • Aatube@kbin.melroy.org · 3 months ago

          The tarballs are the official distributions of the source code. The maintainer kept the malicious entry point out of the git repository while retaining it in these tarball releases.

          All of this would be avoided if Debian downloaded from GitHub’s distributions of the source code.

          • Corngood@lemmy.ml · 3 months ago

            All of this would be avoided if Debian downloaded from GitHub’s distributions of the source code, albeit unsigned.

            In that case they would have just put it in the repo, and I’m not convinced anyone would have caught it. They may have obfuscated it slightly more.

            It’s totally reasonable to trust a tarball signed by the maintainer, but there probably needs to be more scrutiny when a package changes hands like this one did.

          • barsoap@lemm.ee · 3 months ago

            Downloading from GitHub is how NixOS avoided getting hit. On unstable, that is; on stable a tarball gets downloaded.

            Another reason it didn’t get hit is that the exploit is Debian/Red Hat-specific, checking for files and environment variables that just aren’t present when Nix builds it. That doesn’t mean Nix couldn’t be targeted, though. Also, it’s a bit iffy that replacing the package on unstable took on the order of 10 days, which is 99.99% build time because it’s a full rebuild. It was much better on stable, but it’s not like unstable doesn’t get regular use, especially as you can mix and match when running NixOS.

            It’s probably a good idea to make a habit of pulling directly from GitHub (generally, from the VCS). Nix checks hashes all the time, so upstream doing a sneaky change would break the build; it’s more about the version you’re using being the one that has its version history published. Also: why not?

            Overall, who knows what else is hidden in that code, though. I’ve heard that Debian wants to roll back a whole two years, and that’s probably a good idea. In general we should be much more careful about the TCB, and actually have a proper TCB in the first place, which means making it small and simple. Compilers are always going to be an issue, as small is not an option there, but the likes of HTTP clients and decompressors? Why can they make coffee?

            • chameleon@kbin.social · 3 months ago

              You’re looking at the wrong line. NixOS pulled the compromised source tarball just like nearly every other distro, and the build ends up running the backdoor injection script.

              It’s just that much like Arch, Gentoo and a lot of other distros, it doesn’t meet the gigantic list of preconditions for it to inject the sshd compromising backdoor. But if it went undetected for longer, it would have met the conditions for the “stage3”/“extension mechanism”.

              • barsoap@lemm.ee · 3 months ago

                You’re looking at the wrong line.

                Never mind the lines I linked to; I just copied the links from search.nixos.org, and those always link to the description field’s line for some reason. I did link to unstable twice, though; this is the correct one, and as you can see it goes to tukaani.org, not github.com. Correct me if I’m wrong, but while you can attach additional things (such as pre-built binaries) to GitHub releases, the source tarballs are generated from the repository and a tag, so they will match the repository. Maybe you can do some shenanigans with rebase, which should be fixed.

                • chameleon@kbin.social · 3 months ago

                  For any given tag, GitHub will always have an autogenerated “archive/” link, but the “release/” link is a set of maintainer-uploaded blobs. In this situation, those are the compromised ones. Any distro pulling from an “archive/” link would be unaffected, but I don’t know of any doing that.

                  The problem with the “archive/” links is that GitHub reserves the right to change them. They’re promising to give notice, but it’s just not a good situation. The “release/” links are only going to change if the maintainer tries something funny, so the distro’s usual mechanisms to check the hashes normally suffice.

                  NixOS 23.11 is indeed not affected.

    • harsh3466@lemmy.ml · 3 months ago

      I don’t understand the actual mechanics of it, but my understanding is that it’s essentially like what happened with Volkswagen and their diesel emissions-testing scheme, where the car had a way to know it was being emissions-tested and adapted to that.

      The malicious actor had a mechanism that exempted the malicious code when built from source, presumably because it would be more likely to be noticed when building/examining the source.

      • arthur@lemmy.zip · 3 months ago

        The malicious code is not in the source itself; it’s in test files and other files. The build process hijacks the build and inserts the malicious content, while the code itself stays clean, so the co-maintainer was able to keep it hidden in plain sight.

        • sincle354@kbin.social · 3 months ago

          So it’s not that the Volkswagen cheated on the emissions test. It’s that running the emissions test (as part of the building process) MODIFIED the car ITSELF to guzzle gas after the fact. We’re talking Transformers level of self modification. Manchurian Candidate sleeper agent levels of subterfuge.

      • Corngood@lemmy.ml · 3 months ago

        it had a way to know it was being emissions tested and so it adapted to that.

        Not sure why you got downvoted. This is a good analogy. It does a lot of checks to try to disable itself in testing environments. For example, setting TERM will turn it off.
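
        A sketch of that sort of kill switch (simplified; the variable names come from published analyses of the payload):

        ```c
        /* Sketch of an environment kill switch, simplified from published
           analyses: stay dormant when the process looks like an interactive
           or instrumented session rather than sshd started at boot. */
        #include <stdio.h>
        #include <stdlib.h>

        static int should_activate(void) {
            if (getenv("TERM"))       return 0;  /* terminal attached: not boot-time sshd */
            if (getenv("LD_DEBUG"))   return 0;  /* dynamic-linker tracing enabled */
            if (getenv("LD_PROFILE")) return 0;  /* dynamic-linker profiling enabled */
            return 1;
        }

        int main(void) {
            puts(should_activate() ? "activate payload" : "stay dormant");
            return 0;
        }
        ```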

      • WolfLink@lemmy.ml · 3 months ago

        The malicious code wasn’t in the source code people typically read (the GitHub repo) but was in the code people typically build for official releases (the tarball). It was also hidden in files that are supposed to be used for testing, which get run as part of the official building process.

    • Subverb@lemmy.world · 3 months ago

      The malicious code was written and debugged at their convenience and saved as an object file stripped of debug symbols (this is one of the features that made Freund suspicious enough to keep digging when he profiled his backdoored sshd looking for that 500ms delay: there were no symbols to attribute the CPU cycles to).

      It was then further obfuscated by being chopped up and placed into a raw binary file that was ostensibly included in the tarballs as a test-case file for the xz library’s build process. The file was supposedly an example of a bad compressed file.

      This “test” file was listed in the repo’s .gitignore, so the file’s absence on GitHub was explained. Being included as a binary test file only in the tarballs means the malicious code isn’t on GitHub in any form; it’s nowhere to be seen until you get the tarball.

      The build process then creates some highly obfuscated bash scripts on the fly during compilation that check for the existence of those files (since they won’t be there if you’re building from GitHub). If they’re there, the scripts reassemble the object module, basically replacing the code that you would see in the repo.

      That’s a simplified version of why there’s no code to see, and that’s just one aspect of this thing. It’s sneaky.