• korazail@lemmy.myserv.one · 11 hours ago

    I’m happy you provided a few examples. This is good for anyone else reading along.

    Equifax in 2017: the penalty was, let’s assume the worst case, $700M. The company made $3.3B in 2017, and I’d assume that was after the penalty, but even if it wasn’t, that was a penalty of roughly 21% of revenue. That actually seems like it would hurt.

    TSB in 2022: fined ~£48.6M by two separate agencies. TSB made £183.5M in 2022 (still unclear if that figure was pre- or post-penalty), but this probably actually hurt.
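    For anyone who wants to sanity-check those ratios, here’s a quick back-of-the-envelope sketch (Python, using only the approximate figures quoted above, so the percentages are rough):

    ```python
    # Rough penalty-to-annual-figure ratios, using the approximate numbers quoted above.
    # Illustrative only, not audited financials.
    cases = {
        "Equifax (2017 breach)": {"penalty": 700, "annual": 3300},    # ~$700M vs ~$3.3B (millions USD)
        "TSB (2022 IT failure)": {"penalty": 48.6, "annual": 183.5},  # ~£48.6M vs £183.5M (millions GBP)
    }

    for name, c in cases.items():
        print(f"{name}: penalty was {c['penalty'] / c['annual']:.0%} of that year's figure")

    # Output:
    # Equifax (2017 breach): penalty was 21% of that year's figure
    # TSB (2022 IT failure): penalty was 26% of that year's figure
    ```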

    Uber in 2018: your link suggests Uber avoided any legal discovery that might have exposed their wrongdoing. There are no numbers in the linked article, and a search suggests the numbers are not public. Fuck that. A woman was killed by an AI-driven car, and the family deserves respect and privacy, but Uber DOES NOT. Because it’s not a public record, I can’t tell how much they paid out for the death of the victim, and since Uber is one of those modern venture-capital loss-leader companies, this is hard to respond to.

    I’m out of time (and likely won’t be able to finish before the weekend), so I’m trying to wrap up. Boeing seems complicated, and I’m more familiar with CrowdStrike, which I know fucked up. In both cases, I’m not sure how much of a penalty they paid relative to income.

    I’ll cede the point: there are some companies that have paid a price for making mistakes. When you’re talking about companies, though, the only metric is money paid versus money earned. I would really like there to be criminal penalties for leadership who chase profit over safety, so there’s a bit of wishful thinking in my worldview. If you kill someone as a human being (or 300 people, Boeing), you end up with years in prison, but a company just pays 25% of its profit that year instead.

    I still think Cassandra is right, and that more often than not, software companies are not held responsible for their mistakes. And I think your other premise, that ‘if software is better at something’, does a lot of heavy lifting: software is good at explicit computation, such as math, but it is historically incapable of empathy (a significant part of the original topic… I don’t want to be a number in a cost/benefit calculation). I don’t want software replacing a human in the loop.

    Back to my example of a Flock camera telling the police that a stolen car had been identified… the software was just wrong. The police department didn’t admit any wrongdoing, and maaaaybe at some point the victim will be compensated for their suffering, but I expect Flock will not be on the hook for that. It will be the police department, which is funded by taxpayers.

    Reading your comments outside this thread, I think we would agree on a great many things and have interesting conversations. I didn’t intend to come across as snide, condescending, or arrogant. You made the initial point, Cassandra challenged you, and I agreed with them, so I joined in where they seemed not to.

    The “bizarre emotional reaction” is probably that I despise AI and want it nowhere near any decision-making capability. I think that as we embed “AI” in software, we will find that real people are put at more risk and that software companies will be able to deflect blame when things go wrong.

    • Credibly_Human@lemmy.world · 9 hours ago

      Personally, I feel that the hate for AI is misplaced (mostly; I do get there is a lot of nuance around people’s feelings on training sourcing, etc.). Partially because it’s such a wide catch-all term, but mostly, by far, because all of the problems with AI are really problems with the underlying crony capitalism in charge of its development right now.

      Every problem like AI “lacking empathy” comes down to the people using it: they either don’t care to keep it out of places where it fails at that goal, or they explicitly use it to strip people of their humanity, which inherently lacks empathy.

      If you take away the horrible business motivations and the like, I think it’s pretty undeniable that AI is and will be a great technology for a lot of purposes, just not for a lot of the ones it’s used for now (like this continued idea that all UI can be replaced so that programmers won’t be needed for specific apps, and other such uses).

      Obviously we can’t just separate the two, but I think it’s important to think about, especially regarding regulation. That’s because I believe big AI is currently practically begging to be regulated so that the moat required to create useful AI becomes so large that no useful open-source, general-purpose AI tools can exist without corporate backing. I think that’s one of their end goals, along with making it far more expensive to become a competitor.

      That being said, this has drifted a bit, since the thread was about software in general. On that, and on AI, I do believe empathy can be included; built correctly, a computer system could have a lot more empathy than most human beings, who typically only act with meaningful empathy towards people they personally empathize with, which leads to practices that reinforce awful systemic discrimination.

      As for the Flock example, I think it’s almost certain they got in with some backroom deals, and in a more fair world (where those still somehow exist), the police department would have a contract with stipulations covering what happens after a false identification. The police officers also would not be traumatizing people over stolen property in the first place.

      That is all to say: I think that often when software is blamed, what should actually be blamed is the business goals that led to the creation of that software, and the people behind them. The software is, after all, automation of the will of the owners.