• 0 Posts
  • 113 Comments
Joined 1 year ago
Cake day: June 2nd, 2023




  • This is my exact same experience. I ask for someone to elaborate on their stance, get told (not accused, told) I’m trolling. Ask for explanation/definition of a concept, get called an idiot shitlib and told to read some theory. Ask for civility, get told I deserve abuse for “endorsing genocide”. (By the way, I absolutely oppose the genocide in Gaza. But I’m a genocide supporter I guess because I won’t flush my vote third party this November.)

    Hexbear is a community that expects you to conform. Every time there is a post like this, someone comes out of the woodwork and says “They’re nice people if you talk like them and agree with them on everything.” It’s cool that you’re not getting abused, but abuse is coming from that space, whether or not it is happening to you.

    It’s a shame, because I would like to hear the nuances of their viewpoints, but I can never get them to tell me what they are. They’re always complaining that nobody tries to understand, but dogpiling on anyone who asks questions. Then they pull up your report history and tell you “It’s just a little dunking bro, stop being a snowflake” for not putting up with it.

    Users of Hexbear, if you’re reading these words, do better. Nobody is going to sympathize with your cause if you antagonize outsiders who want to learn more.


  • This is a good question and your curiosity is appreciated.

    A password that has been properly hashed (the scrambling described in that Avalanche Effect Wikipedia entry, which protects the original password in storage) can take trillions of years to crack by brute force, and each additional character multiplies the search space by the size of the character set (roughly 95 for printable ASCII), so crack time grows exponentially with length. Unless the AI can bring that number to less than 90 days - a fairly standard password change frequency for corporate environments - or heck, just less than 100 years so it can be done within the hacker’s lifetime, it’s not really going to matter how much faster it becomes.

  • The easier method (already happening, in fact) is to use an LLM to scan a person’s social media and then reach out to relatives pretending to be that person, asking for bail money, logins, etc. If the data is sufficiently locked down, the weakest link will be the human who knows how to get to it.