• Credibly_Human@lemmy.world · ↑2 ↓1 · 21 hours ago

    This is pure pseudo-intellectualism because you literally have no argument or point.

    You have no reasoning, and you’re projecting that onto me because you can’t explain the opinion your feelings have brought you to.

    • cassandrafatigue@lemmy.dbzer0.com · ↑2 ↓2 · 16 hours ago

      I’m not willing to argue with you. I’ve argued this with you¹ a thousand times; you are not rational. Everyone who reads your shit knows what I’m talking about. Ask them.

      ¹perhaps with a different name and face, but otherwise indistinguishable. It gets tedious.

      • Credibly_Human@lemmy.world · ↑1 · 13 hours ago

        With the amount you’ve typed, you could easily have typed a rationale. The truth is your opinions carry no weight and have no good rationale. That is all.

        • korazail@lemmy.myserv.one · ↑2 ↓1 · 11 hours ago

          The burden of proof is on you. Show me one example of a company being held liable (really liable, not a settlement/fine for a fraction of the money they made) for a software mistake that hurt people.

          The reality is that a company can make X dollars with software that makes mistakes, and then pay X/100 dollars when those mistakes hurt people and the case goes to court. That’s not a punishment; it’s a cost of doing business. And the company pays that fine, and the humans who made those decisions are shielded from further repercussions.

          When you said:

          the idea that the software vendor could not be held liable is farcical

          We need YOU to back that up. The rest of us have never seen it hold true.

          And it gets worse when the software vendor is a step removed: see Flock cameras making big mistakes. The software decided that a car was stolen, but it was wrong. The police intimidated an innocent civilian because the software was wrong. Not only were the police not held accountable, Flock was never even in the picture.

          • Credibly_Human@lemmy.world · ↑2 · 6 hours ago

            Burden of proof? You literally just made your first point about anything I said; I shouldn’t even have wasted the time responding.

            You are really staking your argument on thinking companies don’t face consequences for software fuckups?

            I’m sure you’ll make up some excuses for why somehow none of these count, but the list is so deep that you could debate every example here and ten more would pop up. Tons more happens behind the scenes too, with SLA contracts, etc.

            This sounds like you knew you were wrong all along but still wanted to be snide, condescending, and undeservedly arrogant about it.

            The fact that some don’t doesn’t mean that none do.

            You go do some legwork before requesting it of me, after thinking you could just troll your way out of your ridiculous initial comment.

            Reading your other comments outside this thread, this whole chain seems so illogical. What triggered this bizarre emotional reaction to such a relatively innocuous comment? You must have been reading something into it.

            • korazail@lemmy.myserv.one · ↑2 · 3 hours ago

              I’m happy you provided a few examples. This is good for anyone else reading along.

              Equifax in 2017: the penalty was, let’s assume the worst case, $700M. The company in 2017 made $3.3B, and I’d assume that was after the penalty, but even if it wasn’t, that’s a penalty of roughly 21% of revenue. That actually seems like it would hurt.

              TSB in 2022: fined ~£48.6M by two separate agencies. TSB made £183.5M in revenue in 2022; still unclear if that was pre- or post-penalty, but this probably actually hurt.
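
              Just to make those ratios concrete, here is a quick back-of-the-envelope check using only the figures quoted above (the revenue numbers are approximate, so treat the percentages as rough):

                  # Penalty as a share of that year's revenue, using the figures quoted above.
                  cases = [
                      ("Equifax 2017", 700.0, 3300.0),  # ~$700M penalty vs ~$3.3B revenue
                      ("TSB 2022", 48.6, 183.5),        # ~£48.6M in fines vs £183.5M quoted
                  ]
                  for name, penalty_m, revenue_m in cases:
                      print(f"{name}: {penalty_m / revenue_m:.0%} of revenue")
                  # Output:
                  # Equifax 2017: 21% of revenue
                  # TSB 2022: 26% of revenue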

              Uber in 2018: your link suggests Uber avoided any legal discovery that might have exposed their wrongdoing. There are no numbers in the linked article, and a search suggests the numbers are not public. Fuck that. A woman was killed by an AI-driven car, and the family deserves respect and privacy, but Uber DOES NOT. Because it’s not a public record, I can’t tell how much they paid out for the death of the victim, and since Uber is one of those modern venture-capital loss-leader companies, this is hard to respond to.

              I’m out of time (I likely won’t be able to finish before the weekend, so I’m trying to wrap up). Boeing seems complicated; I’m more familiar with CrowdStrike, and I know they fucked up. In both cases, I’m not sure how much of a penalty they paid out relative to income.

              I’ll cede the point: there are some companies that have paid a price for making mistakes. When you’re talking about companies, though, the only metric is money-paid/money-earned. I would really like there to be criminal penalties for leadership who chase profit over safety, so there’s a bit of ‘wishful thinking’ in my worldview. If you kill someone as a human being (or 300 people, Boeing), you end up with years in prison, but a company just pays 25% of its profit that year instead.

              I still think Cassandra is right, and that more often than not, software companies are not held responsible for their mistakes. And I think your other premise, the ‘if software is better at something’ part, carries a lot: software is good at explicit computation, such as math, but it is historically incapable of empathy (a significant part of the original topic… I don’t want to be a number in a cost/benefit calculation). I don’t want software replacing a human in the loop.

              Back to my example of a Flock camera telling the police that a stolen car was identified… the software was just wrong. The police department didn’t admit any wrongdoing, and maaaaybe at some point the victim will be compensated for their suffering, but I expect Flock will not be on the hook for that. It will be the police department, which is funded by taxpayers.

              Reading your comments outside this thread, I think we would agree on a great many things and have interesting conversations. I didn’t intend to come across as snide, condescending, or arrogant. You made the initial point, Cassandra challenged you, and I agreed with them, so I joined in where they seemed unwilling to.

              The “bizarre emotional reaction” is probably that I despise AI and want it nowhere near any decision-making capability. I think that as we embed “AI” in software, we will find that real people are put at more risk and that software companies will be able to deflect blame when things go wrong.

              • Credibly_Human@lemmy.world · ↑1 · 2 hours ago

                Personally, I feel that the hate for AI is misplaced (mostly; I do get that there is a lot of nuance regarding people’s feelings on the sourcing of training data, etc.). That’s partially because it’s such a wide catch-all term, and then mostly, by far, because all of the problems with AI are actually just problems with the underlying crony capitalism in charge of its development right now.

                Every problem like AI “lacking empathy” comes down to the people using it: they don’t care to keep it out of places where it can’t meet that goal, or they are explicitly using it to strip people of their humanity, which is itself something that inherently lacks empathy.

                If you take away the horrible business motivations, etc., I think it’s pretty undeniable that AI is and will be a great technology for a lot of purposes, and not for a lot of the ones it’s used for now (this continued idea that all UI can be replaced such that programmers won’t be needed for specific apps, and other such uses).

                Obviously we can’t just separate that, but I think it’s important to think about, especially regarding regulation. That’s because I believe big AI is currently practically begging to be regulated such that the moat to create useful AI becomes so large that no useful open-source, general-purpose AI tools can exist without corporate backing. That’s, I think, one of their end goals, along with making it far more expensive to become a competitor.

                That being said, this has gotten a little out of hand, in that the original point was about software in general. Regarding that and AI, I do believe empathy can be included: built correctly, a computer system could have a lot more empathy than most human beings, who typically only show meaningful empathy, in their actions, towards people they personally empathize with, which leads to practices that reinforce awful systemic discrimination.

                As for the Flock example, I think it’s almost certain they got in with some backroom deals, and in a fairer world (where those still exist somehow), the police department would have a contract with some sort of stipulations regarding what happens with false identifications. The police officers also would not be traumatizing people over stolen property in the first place.

                That is all to say: I think that often when software is blamed, what should actually be blamed is the business goals that led to the creation of that software, and the people behind them. The software is, after all, an automation of the will of its owners.