cross-posted from: https://lemmy.nz/post/28397398

The suspension triggered strong responses across social media and beyond. Hashtags like #CancelDisneyPlus and #CancelHulu trended as users shared screenshots of their canceled subscriptions.

With cancellations surging, many subscribers reported technical issues. On Reddit’s r/Fauxmoi, one post read, “The page to cancel your Hulu/Disney+ subscription keeps crashing.”

  • Bongles@lemmy.zip · 18 points · 1 day ago

    On one hand, could be a “crash”. On the other hand, tons of websites break when they get a little extra traffic.

    Side tangent: it seems odd to me that this is still a thing. Most company websites aren’t hosted on premises, so do services like (I assume) AWS not scale up when there’s extra traffic? Squarespace has been advertising for years that it will scale up under load. I’ve never tested it, but still.
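
    (For context, compute autoscaling really is a commodity feature now. Below is a minimal sketch of what a target-tracking policy looks like with AWS’s boto3 SDK, assuming a hypothetical, pre-existing Auto Scaling group named web-asg; as the replies below point out, this only scales the web tier, not everything behind it.)

    ```python
    import boto3

    # Hypothetical sketch: keep average CPU across an assumed Auto Scaling
    # group "web-asg" near 50%; AWS adds/removes instances to hold the target.
    # Note this scales only the web tier, not the database behind it.
    autoscaling = boto3.client("autoscaling")

    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",  # assumed, pre-existing group
        PolicyName="cpu-target-tracking",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization",
            },
            "TargetValue": 50.0,
        },
    )
    ```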

    • andros_rex@lemmy.world · 2 points · 19 hours ago

      I feel like Disney has internal stuff? I listened to a podcast where an ex-employee changed the fonts on a bunch of stuff to Wingdings, etc., and made everything unusable.

    • Kissaki@feddit.org · 14 points · 1 day ago

      You have to design for scalability, and bottlenecks can be anywhere. Even if their virtual servers’ CPU and RAM can scale up, other things may be the bottleneck: maybe the connection to the DB, maybe the DB is elsewhere and doesn’t scale. You can’t really make a reasonable guess from the outside.

      Mass cancellations are not usually a load scenario they would design around. Handling them well also doesn’t add value for them.
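
      (A toy illustration of that point, with everything invented for the sketch: even if the web tier scales to plenty of workers, a fixed-size resource behind it, like a database connection pool, caps throughput.)

      ```python
      import threading
      import time

      # Toy model: 50 "web workers" scale fine, but they all share a
      # database that only allows 5 concurrent connections.
      db_connections = threading.BoundedSemaphore(5)

      def handle_request(request_id: int) -> None:
          with db_connections:  # requests queue here: the bottleneck
              time.sleep(0.1)   # pretend DB query

      start = time.time()
      workers = [threading.Thread(target=handle_request, args=(i,))
                 for i in range(50)]
      for w in workers:
          w.start()
      for w in workers:
          w.join()

      # Prints ~1.0s instead of ~0.1s: the pool, not CPU or RAM, is the ceiling.
      print(f"50 requests took {time.time() - start:.2f}s")
      ```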

    • okmko@lemmy.world · 2 points · 20 hours ago

      It could also be a lack of graceful failure handling. What we see as a crash may come from some unavailability deep in a long pipeline of services.
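
      (A rough sketch of what graceful failure would look like here, with a hypothetical billing endpoint invented for the example: catch the deep-pipeline error and show the user something useful instead of a dead page.)

      ```python
      import requests

      def load_cancellation_page() -> str:
          # Hypothetical call deep in the pipeline (billing service, etc.).
          try:
              resp = requests.get(
                  "https://billing.example.com/api/subscription", timeout=2
              )
              resp.raise_for_status()
              return resp.text
          except requests.RequestException:
              # Graceful degradation: the user gets a retry message,
              # not a crashed page, when something downstream is down.
              return "We're having trouble right now. Please try again shortly."
      ```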

    • DreamlandLividity@lemmy.world · 7 points · edited · 1 day ago

      If your page is just static (no login, no interaction, everyone always sees the same thing), then it scales easily: scaling just means copying the site to more servers. Now imagine a user adds a comment. That comment has to be added to every copy of your site, so that everyone sees it regardless of which server they use. So a single comment creates more work the more servers you have. This is where scaling becomes a complex discipline that you need to prepare for deliberately as a software developer: you have to figure out what data will be stored where and accessed how.
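
      (A minimal sketch of that comment example, with made-up classes: serving a static page from N copies is trivial, but every write has to reach all N copies.)

      ```python
      # Toy model of the point above: reads scale by copying the site,
      # but every write creates work on every copy.

      class Server:
          def __init__(self) -> None:
              self.comments: list[str] = []

          def render_page(self) -> str:  # cheap, purely local read
              return "\n".join(self.comments)

      servers = [Server() for _ in range(10)]  # "scaling" = more copies

      def post_comment(text: str) -> None:
          # The write must reach every copy so everyone sees it,
          # whichever server they hit: O(len(servers)) work per comment.
          for s in servers:
              s.comments.append(text)

      post_comment("first!")
      assert all(s.render_page() == "first!" for s in servers)
      ```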

      • BCsven@lemmy.ca · 1 point · 22 hours ago

        Caching servers: they self-replicate when a change is committed, then send a signal back to the main server that the task has completed.

          • BCsven@lemmy.ca · 1 point · 19 hours ago

            Oh right, I skipped a part. It’s not really a dev complexity prep issue. You build the database that serves the comments etc. as if it’s in one place, then you deploy cache servers for scaling. They self-replicate: a comment in California gets committed to the database, the server in New York pulls the info over from the California change and sends back that it’s synced with the change, and vice versa. The caching servers do the work, not your program.
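
            (A rough, self-contained sketch of that setup, with all names invented: one primary takes writes, and regional cache servers pull new changes and acknowledge once synced. Real replication adds failure handling, conflict rules, and lag that this toy ignores.)

            ```python
            # Toy version of the setup described above: a primary takes
            # writes; regional caches pull new changes and ack when synced.

            class Primary:
                def __init__(self) -> None:
                    self.log: list[str] = []  # committed changes, in order

                def commit(self, change: str) -> None:
                    self.log.append(change)

            class CacheServer:
                def __init__(self, region: str, primary: Primary) -> None:
                    self.region = region
                    self.primary = primary
                    self.synced_up_to = 0

                def pull(self) -> None:
                    # Pull whatever we haven't seen yet, then report back
                    # how far we have synced (the "ack").
                    new = self.primary.log[self.synced_up_to:]
                    self.synced_up_to = len(self.primary.log)
                    print(f"{self.region}: pulled {len(new)} change(s)")

            primary = Primary()
            new_york = CacheServer("new-york", primary)

            primary.commit("comment: hello from California")
            new_york.pull()  # New York catches up and acks
            ```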

            • DreamlandLividity@lemmy.world · 1 point · 11 hours ago

              That entirely depends on your application. What you described is one possible approach that only works in specific circumstances.
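
              (One concrete circumstance where that approach bites, as a toy: a user writes to the primary, then immediately reads from a replica that hasn’t pulled yet and doesn’t see their own comment. Apps that need read-your-writes consistency have to route around this.)

              ```python
              # Self-contained toy of a stale read from a lagging replica.
              primary_log: list[str] = []
              replica_log: list[str] = []  # replicates asynchronously

              def write(comment: str) -> None:
                  primary_log.append(comment)  # replica hasn't pulled yet

              def read_from_replica() -> list[str]:
                  return list(replica_log)

              write("my comment")
              # The user reloads before the replica pulls and sees nothing:
              assert "my comment" not in read_from_replica()
              ```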

              • BCsven@lemmy.ca · 1 point · 6 hours ago

                Besides application specifics, it’s how the internet currently works to give low latency: AWS, Azure, Linode, etc. have data centers across the globe to replicate data near where the people are.