• jballs@sh.itjust.works · +46 · 55 minutes ago (edited)

    Damn, that’s interesting. I like how they walked through, step by step, how they got the exploit to work. This is what real hacking actually looks like: much less glamorous than what you see in the movies.

  • x00z@lemmy.world · +29 · 6 hours ago

    $5,000

    This is like 1/10th of what a good blackhat hacker would have gotten out of it.

    • scarilog@lemmy.world · +8 · 4 hours ago

      I always wonder what’s stopping security researchers from selling these exploits to blackhat marketplaces, collecting the money, waiting a bit, and then telling the original company so they end up patching it.

      It would probably break some contractual agreements, but if you’re doing this as a career, surely you’d know how to hide your identity properly.

      • filcuk@lemmy.zip · +2 · 17 minutes ago

        It’s not worth the risk. If your job is border control, would you be smuggling goods? Maybe some would, but most would not.

        They’re whitehat because they don’t want to take part in illegal activities, or already have and have grown from it.

      • x00z@lemmy.world · +8 · 3 hours ago

        The chances that such an old exploit gets found at the same time by a whitehat and a blackhat are very small. It would be hard not to be suspicious.

        • scarilog@lemmy.world · +2 · 2 hours ago (edited)

          Yes, but I was saying the blackhat marketplaces wouldn’t really have much recourse if the person selling the exploit knew how to cover their tracks, i.e., they wouldn’t have anyone to sue or go after.

          • x00z@lemmy.world · +4 · 3 hours ago

            I’m saying blackhat hackers can make far more money off the exploit by itself. I’ve seen far worse techniques being used to sell services for hundreds of dollars, and the people behind those are making thousands. An example is slowly brute-forcing the blocked words on a YouTube channel, since the owner might have blocked their own name, phone number, or address.

            What you’re talking about is playing both sides, and that is just not worth doing for multiple reasons. It’s very obvious when somebody is doing that. People don’t just find the same exploit at the same time in years old software.

    • saltesc@lemmy.world · +100/-6 · 10 hours ago

      Our names, numbers, and home addresses used to be in a book delivered to everyone’s door or found stacked in a phone booth on the street. That was normal for generations.

      It’s funny how much fuckwits can change the course of society and how we can’t have nice things.

      • dmtalon@infosec.pub · +47/-4 · 10 hours ago

        Right, but when everyone got phone books, those were only shared locally in the town. It would have been pretty hard to figure out someone’s phone number from across the state or country without the internet, unless you knew someone in that town.

        You could also pay to be unlisted, a luxury long since gone. How cool would it be to make your data ‘unlisted’ by paying a small monthly fee?

        • corsicanguppy@lemmy.ca · +24 · 9 hours ago

          Phone books from outside my region were available at the library; that place where they store a consolidated collection of books for just anyone to sign out and use.

          • Paradox@lemdro.id · +4 · 5 hours ago

            I once used one to look up my friend from summer camp. He lived in New York City, and I didn’t live anywhere close.

            The library had a bunch of NYC phone books.

          • dmtalon@infosec.pub · +5/-1 · 8 hours ago

            I don’t remember that; however, it doesn’t surprise me, at least for a radius around your area. I’d be surprised if they had all of them from all the states.

            • Jerkface@lemmy.world · +9 · 5 hours ago (edited)

              You could just have them borrow one from whatever other library had it. Hell, you could just call the phone company and order the one you want yourself. Fuck, you could just call 411 and have them look it up for you right then.

      • Stamets@lemmy.world · +14 · 10 hours ago

        Still are. I got a phone book delivered a week ago, I shit thee not. Granted I’m on a small island and the book is small too. But like, you can pay to have your number removed from the book. Can you have it removed from this? Not to mention all the 2FA stuff that can be connected to the phone number. Someone clones your number or takes it and suddenly they’ve got access to a whole lot of your login stuff.

  • Zacryon@feddit.org · +43 · 9 hours ago

    Casually rotating 18,446,744,073,709,551,616 IP addresses to bypass rate limits.

    I am not in IT security, but I find it fascinating what clever tricks people use to break (into) stuff.

    In a better world, we might use this energy for advancing humanity instead of looking for ways to hurt each other. (Not saying the author is doing that, just lamenting that IT security is necessary due to hostile actors in this world.)
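For scale, that enormous number is exactly the host space of one /64 IPv6 block: addresses are 128 bits, a /64 allocation fixes the first 64 bits as the network prefix, and the remaining 64 bits are free to vary. A quick sanity check of the arithmetic in Python:

```python
# An IPv6 address is 128 bits; a /64 allocation fixes the first 64 bits
# (the network prefix), leaving 64 bits for the interface identifier.
host_bits = 128 - 64
addresses_per_64 = 2 ** host_bits
print(addresses_per_64)  # 18446744073709551616
```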

    • dan@upvote.au · +7/-1 · 8 hours ago (edited)

      This doesn’t really work in real life, since IPv6 rate limiting is done per /64 block, not per individual IP address. A /64 is the smallest subnet the IPv6 spec allows for a network segment, especially if you want features like SLAAC and privacy extensions (which most home users would be using).

      SLAAC means that devices on the network can assign their own IPv6 addresses. It’s like DHCP, but stateless: no server is needed.

      Privacy extensions mean the IPv6 address is changed periodically to keep any individual device from being tracked. All devices on an IPv6 network usually have their own public IP, which fixes some things (NAT and port forwarding aren’t needed any more) but has potential privacy issues if one device keeps the same IP for a long time.
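A rough sketch of the privacy-extensions idea using Python’s standard library. The interface-identifier generation below is simplified random bits, not the exact RFC 4941 algorithm, and the prefix is the IPv6 documentation prefix, not a real allocation:

```python
import ipaddress
import secrets

# The /64 prefix the router advertises (documentation prefix, not routable).
network = ipaddress.IPv6Network("2001:db8:abcd:12::/64")

def temporary_address(net: ipaddress.IPv6Network) -> ipaddress.IPv6Address:
    """Pick a random 64-bit interface identifier inside the /64,
    roughly what privacy extensions do (simplified)."""
    iid = secrets.randbits(64)
    return ipaddress.IPv6Address(int(net.network_address) + iid)

# However often the low 64 bits rotate, the address stays inside the
# same /64, which is why per-/64 rate limiting still catches it:
addr = temporary_address(network)
assert addr in network
```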

    • Tinidril@midwest.social · +3 · 9 hours ago

      Those are IPv6 addresses that work a bit differently than IPv4. Most customers only get assigned a single IPv4 address, and even a lot of big data centers only have one or two blocks of 256 addresses. The smallest allocation of IPv6 for a single residential customer is typically a contiguous block of the 18,446,744,073,709,551,616 addresses mentioned.

      If Google’s security team is even marginally competent, they will recognize those contiguous blocks and treat each one as they would a single IPv4 address. Every address in such a block shares the same /64 prefix, and it’s actually easier to rate limit on those prefixes than on entire individual addresses.

  • malloc@lemmy.world · +18 · 10 hours ago

    Google, Apple, and the rest of big tech are pregnable despite their access to vast amounts of capital and labor.

    I used to be a big supporter of using their “social sign on” (or, more generally, single sign-on) as a federated authentication mechanism. They have access to brilliant engineers, so I naively thought: “Well, these companies are well funded and security focused. What could go wrong with having them handle a critical entry point for services?”

    Well, as this position continues to age poorly: many fucking aspects can go wrong!

    1. These authentication services owned by big tech are much more attractive to attack. Finding that one vulnerability in their massive attack surface is difficult, but not impossible.
    2. If you use big tech to authenticate to services, you are now subject to the vague terms of service of big tech. Oh, you forgot to pay your Google Store bill because the card on file expired? Now your Google account is locked and you lose access to hundreds of services that have no direct relation to Google/Apple.
    3. Using third-party auth mechanisms like Google often complicates the relationship between service provider and consumer. Support costs increase: when an 80-year-old forgets the password or 2FA method for their Google account, they will go to the service provider instead of Google to fix it. Then you spend inordinate amounts of time and resources trying to fix the issue, and these costs are eventually passed on to customers in some form or another.

    Which is why my new position is in favor of federated authentication protocols: similar to how Lemmy and the fediverse work, but for authentication and authorization.

    Having your own IdP won’t fix the third issue, but at least it alleviates the first and second concerns.
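The plumbing that makes a self-hosted IdP interchangeable with Google’s is OpenID Connect discovery: any spec-compliant IdP publishes its endpoints at a well-known path. A minimal sketch (the issuer URL is hypothetical, and a real client would also validate TLS and token signatures):

```python
import json
import urllib.request

# OIDC discovery: a spec-compliant IdP (self-hosted Keycloak, Authentik,
# etc.) serves its endpoint metadata at this well-known path, which is
# what lets a service federate with *your* IdP instead of Google's.
def discovery_url(issuer: str) -> str:
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

def discover(issuer: str) -> dict:
    """Fetch the IdP's endpoint metadata (authorization, token, JWKS URIs)."""
    with urllib.request.urlopen(discovery_url(issuer)) as resp:
        return json.load(resp)

# e.g. discover("https://id.example.org")["authorization_endpoint"]
```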

  • dan@upvote.au · +20 · 10 hours ago

    Most service providers like Vultr provide /64 ip ranges, which provide us with 18,446,744,073,709,551,616 addresses. In theory, we could use IPv6 and rotate the IP address we use for every request, bypassing this ratelimit.

    This usually doesn’t work, as IPv6 rate limiting is usually done per /64 range (which is the smallest subnet allowed per the IPv6 spec), not per individual IP.
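A minimal sketch of what per-/64 keying looks like, using a hypothetical `rate_limit_key` helper and the IPv6 documentation prefix for the example addresses:

```python
import ipaddress

def rate_limit_key(ip: str) -> str:
    """Collapse an IPv6 address to its /64 prefix so that rotating the
    low 64 bits doesn't yield a fresh rate-limit bucket."""
    addr = ipaddress.ip_address(ip)
    if addr.version == 6:
        return str(ipaddress.ip_network(f"{ip}/64", strict=False))
    return ip  # IPv4: key on the full address

# Two "rotated" addresses inside the same /64 map to one bucket:
assert rate_limit_key("2001:db8:1234:5678::1") == \
       rate_limit_key("2001:db8:1234:5678:dead:beef:cafe:1")
```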

  • Jo Miran@lemmy.ml · +10/-1 · 10 hours ago

    I set up my GrandCentral, now Google Voice, account using a VoIP number from a company that went defunct many years ago. My Google accounts use said Google Voice phone number to validate, because GrandCentral wasn’t owned by Google back then. I assume this use case is so small there is no point fixing it. So essentially, my accounts fall into a loop where Google leads to Google, etc.

    heh

    • atrielienz@lemmy.world · +3 · 9 hours ago

      I did something of the opposite. I had a Verizon number and moved it to Google Voice. I had a second Google Voice number that then became a Google Fi number. So now I have a Verizon-coded Google Voice number (that my bank accepts, etc.) and a Google Fi number that was originally a Google Voice number. I’m honestly curious how this affects me. My work numbers have never been associated with my personal accounts, so there’s that.

  • hansolo@lemmy.today · +6/-2 · 9 hours ago

    F. This will be rolled into an OSINT tool within a week and scraped into a darkweb database by next Friday.

    • Sandbar_Trekker@lemmy.today · +19 · 7 hours ago

      I think you missed the part at the very end of the page showing the timeline: they reported the vulnerability back in April, were rewarded for finding it, it was patched in May, and they were allowed to publicize it as of today.

  • IllNess@infosec.pub · +5 · 10 hours ago

    Eventually, I had a PoC running, but I was still getting the captcha? It seemed that for whatever reason, datacenter IP addresses using the JS disabled form were always presented with a captcha, damn!

    The simplest answer is probably the right one: datacenter IPs are mostly used by bots.

  • rollmagma@lemmy.world · +6/-31 · 10 hours ago

    God, I hate security “researchers”. If I posted an article about how to poison everyone in my neighborhood, I’d be getting a knock on the door. This kind of shit doesn’t help anyone. “Oh but the state-funded attackers, remember stuxnet”. Fuck off.

    • cmnybo@discuss.tchncs.de · +37 · 9 hours ago

      Without researchers like that, someone else would figure it out and use it maliciously without telling anyone. This researcher got Google to close the loophole that the exploit requires before publicly disclosing it.

      • rollmagma@lemmy.world · +1/-7 · 5 hours ago

        That’s the fallacy I’m alluding to when I mention Stuxnet. We have really well-funded, well-intentioned, intelligent people creating tools, techniques, and overall knowledge in a field. Generally speaking, some of these findings are more makings than findings.

    • TipRing@lemmy.world · +25 · 9 hours ago

      This disclosure was from last year and the exploit was patched before the researcher published the findings to the public.

    • ryry1985@lemmy.world · +19 · 9 hours ago

      I think the method of researching and then informing the affected companies confidentially is a good way to do it but companies often ignore these findings. It has to be publicized somehow to pressure them into fixing the problem.

      • rollmagma@lemmy.world · +1 · 5 hours ago (edited)

        Indeed, and then it becomes a market that incentivises more research in that area, which I don’t think is helpful for anyone. It’s like your job description being “professional pessimist”. We could be putting that amount of effort into building more secure software to begin with.

    • Imgonnatrythis@sh.itjust.works · +11 · 9 hours ago

      I think it’s important for users to know how vulnerable they really are, and for providers to have a fire lit under their ass to patch holes. I think it’s standard practice to alert providers to these findings early, but I’m guessing a lot of them already knew about the vulnerabilities and often don’t give a shit.

      Compared to airing this dirty laundry I think the alternatives are potentially worse.

      • rollmagma@lemmy.world · +1/-2 · 5 hours ago
        5 hours ago

        Hmm I don’t know… Users usually don’t pay much attention to security. And the disclosure method actively hides it from the user until it no longer matters.

        For providers, I understand, but can’t fully agree. I think it’s a misguided culture that creates busy-work at all levels.