The Beehaw admins made this choice, and documented their rationale here: https://beehaw.org/post/567170
What’s the network flow like? I’m posting this to the lemmy.ml /asklemmy community, but I’m composing it on the sh.itjust.works interface. I’m assuming sh.itjust.works hands this over to lemmy.ml. How does my browsing work? Is all of my traffic routed through sh.itjust.works?
Your home instance is sh.itjust.works, that's where all the info you care about resides. Your list of subscribed communities resides there. When you read a post, it gets fetched out of the db on sh.itjust.works (irrespective of where the home instance for that post's community is… when you read it, it comes out of the database on your home instance), and when you comment on a post, that gets written to the db on your home instance. Your home instance is a standalone, fully functioning thing.

If other users on sh.itjust.works subscribe to the same community… there's no incremental overhead. Y'all's instance is ALREADY subscribed to that sub, so other users on your instance can sub to it for free; it's already in the instance's database.
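To make that concrete, here's a rough sketch (not from the thread) of what "reads come from your home instance" looks like against Lemmy's HTTP API. The exact endpoint paths and parameters are my assumption about the v3 API, so double-check them against your instance's version:

```python
# Sketch: reads for a remote community are served by YOUR home instance,
# not the community's home server. Endpoint paths/params are assumptions.
import requests

HOME_INSTANCE = "https://sh.itjust.works"  # your login instance

# Ask your home instance for a community that is homed elsewhere (lemmy.ml).
community = requests.get(
    f"{HOME_INSTANCE}/api/v3/community",
    params={"name": "asklemmy@lemmy.ml"},
    timeout=10,
).json()

# Posts for that community also come out of your home instance's copy.
posts = requests.get(
    f"{HOME_INSTANCE}/api/v3/post/list",
    params={"community_name": "asklemmy@lemmy.ml", "sort": "New", "limit": 5},
    timeout=10,
).json()

for p in posts.get("posts", []):
    print(p["post"]["name"])
```

Point it at your own instance and any remote community to see that everything resolves through your login server.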
Assuming there's a mass influx of redditors, what does it look like as things fail?

If lemmy.ml (where this community is homed) falls over from being overloaded, or is just broken for whatever reason, your instance is unaffected. You can still read posts and make comments. This community, however, is affected. New posts and comments for this community might come through intermittently or not at all for you (and everyone in the lemmyverse) because the community's home server isn't working well enough to reliably deliver them over federated replication. You can still read older posts and comments that have already been synced to your home instance, but new ones might not arrive. You might also see weird stuff, like being able to see new comments from other sh.itjust.works users on this community, since those get written to your db before getting federated back to the community's home server. But mostly, updates from other instances stop or get unreliable.
If sh.itjust.works falls over for some reason… well… that sucks for you. You can't log in or browse anything on it. You can still visit this sub at https://lemmy.ml/c/asklemmy/ as long as lemmy.ml is working, and you'll be able to see the posts and comments that other accounts make. But you'll be an anonymous, read-only browser; you won't be able to post or comment until sh.itjust.works comes back online (or you make a new account elsewhere and lose all your comment history and subscription list).
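As a hedged illustration of that fallback (again, the endpoint names are my assumptions about Lemmy's v3 API, not something from this thread):

```python
# Sketch of the read-only fallback: if your home instance is down, you can
# still read the community anonymously from its home server.
import requests

def list_posts(instance: str, community: str) -> list[dict]:
    """Fetch recent posts for a community from the given instance."""
    r = requests.get(
        f"{instance}/api/v3/post/list",
        params={"community_name": community, "sort": "New", "limit": 10},
        timeout=10,
    )
    r.raise_for_status()
    return r.json().get("posts", [])

try:
    # Normal path: reads go through your home instance.
    posts = list_posts("https://sh.itjust.works", "asklemmy@lemmy.ml")
except requests.RequestException:
    # Home instance is down: anonymous, read-only access straight from
    # the community's home server still works.
    posts = list_posts("https://lemmy.ml", "asklemmy")

for p in posts:
    print(p["post"]["name"])
```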
Are there easy mechanisms to allow me to grab my post history?

There's a GitHub issue for this, but it's not done yet: https://github.com/LemmyNet/lemmy/issues/506.
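In the meantime, a rough stopgap sketch: page through your own profile via the API and dump it to JSON yourself. The /api/v3/user endpoint and its parameters are my assumption about Lemmy's current API, and the username is a placeholder:

```python
# Sketch: dump your own posts/comments to a JSON file by paging the profile
# endpoint. Endpoint/params are assumptions; USERNAME is a placeholder.
import json
import requests

INSTANCE = "https://sh.itjust.works"
USERNAME = "your_username_here"  # placeholder

history = {"posts": [], "comments": []}
page = 1
while True:
    r = requests.get(
        f"{INSTANCE}/api/v3/user",
        params={"username": USERNAME, "sort": "New", "page": page, "limit": 50},
        timeout=10,
    )
    r.raise_for_status()
    data = r.json()
    if not data.get("posts") and not data.get("comments"):
        break  # no more pages
    history["posts"].extend(data.get("posts", []))
    history["comments"].extend(data.get("comments", []))
    page += 1

with open("lemmy_history.json", "w") as f:
    json.dump(history, f, indent=2)
```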
I’m assuming most (all?) Lemmy servers are hosted in home labs?
I don't think that's a good assumption. lemmy.ml is hosted on OVH, a cloud provider. My home instance, lemmy.world, is hosted by admins that run something like a 32-CPU Mastodon instance. Most instances with over 100 users are running on some kind of probably modest but "real" cloud instance. The admins are volunteers, but often smart technical folks paying for small but real compute infrastructure.
The idea of Lemmy excites me, but the growth pain that could be coming scares me. Anybody using a CDN in front of their servers? That could be good, but with unconstrained growth, that could be costly, which is very bad.
Anticipating growing pains isn’t wrong, it’s probably gonna happen. But the devs are gonna find and work on the biggest performance problems so that people can viably run bigger instances, and instance admins are gonna run bigger hardware and ask for donations or run patreons to cover the cost. In my opinion, the bigger worry is that Lemmy will fizzle… not that it will spectacularly explode. As long as people join and contribute and are interested, we’ll find a way to improve scalability and performance. The death knell would be if people get bored and leave, but compute capacity won’t be the problem in that scenario.
I use k8s at work and have built a k8s cluster in my homelab… but I did not like it. I tore it down and am currently using podman, and I don't think I would go back to k8s (though I would definitely use docker as an alternative to podman, and would probably even recommend it over podman for beginners, even though I've settled on podman for myself).
Overall, the simplicity and lightweight resource consumption of podman/docker are what I value at home. The extra layers of abstraction and constraints k8s employs are valuable at work, where we have a lot of machines and a lot of people that must coordinate effectively… but I don't have those problems at home, and the overhead (compute overhead, conceptual overhead, and config overhead) of k8s' solutions to them is annoying there.