awesome-selfhosted maintainer here. This critique comes up often (and I sometimes agree…) but it’s hard to properly “fix”:
Any rule that enforces some kind of “quality” guideline has to be explicitly written into the contribution guidelines, so as not to waste submitters’ (and maintainers’) time.
As you can see there are already minimal rules in place (software has to be actively maintained, properly documented, first release must be older than 4 months, must of course be fully Free and Open-source…). Anything more is very hard to word objectively, or is plain unfair - in the 7 years (!) I’ve been maintaining the list, I’ve spent countless hours thinking about it.
For example, rejecting new projects because an existing/already listed one effectively does the same thing would give an unfair advantage to older projects, effectively “locking out” newer ones. Moreover, you will rarely find two projects that have the exact same feature set, workflow, release frequency, technical requirements… and every user has different needs and requirements, so yeah, users of the list are expected to do some research to find the best solution to their particular needs.
This is, of course, less true for some categories (why are there so many pastebins??). But again, it’s hard to find clear and objective criteria to determine what deserves to be listed and what does not.
If we started rejecting projects because “I don’t have a need for it” or “I already use a somewhat equivalent solution and am not going to switch”, that would discard 90% of the entries in the list (and not necessarily the worst ones). I do check that projects being added are in a “production-ready” state and ask more questions during reviews if needed. But it’s hard to be more selective than we already are without falling into subjective “I like/I don’t like” reasoning (let’s ban all Node.js-based projects, npm is horrible and a security liability. Let’s also ban all projects so convoluted and impossible to build and install properly that Docker is the only installation option. See where this leads?)
Also, Free Software has always been very fragmented, which is both a strength and a weakness. The list simply reflects that.
Another idea I contemplated is linking each project to a “review” thread for the software in question. But I will not host or moderate such a forum/review board, and it would be heavily brigaded by PR departments looking to promote their companies’ software.
An HTML version (based on the same data) is coming out soon and will hopefully make the list easier to browse.
I am open to other suggestions, keeping in mind the points above…
250+ self hostable apps
1268 exactly.
You can help clean unmaintained projects out of the list by working on this issue.
I would imagine the source for most projects is hosted on GitHub, or similar platforms? Perhaps you could consider forks, stars, and followers as “votes” and sort each sub category based on the votes. I would imagine that would be scriptable - the script could be included in the awesome list repo, and run periodically. It would be kind of interesting to tag “releases” and see how the sort order changes over time. If you wanted to get fancy, the sorting could probably happen as part of a CI task.
If workable, the obvious benefit is you don’t have to exclude anything for subjective reasons, and it becomes easier for readers of the list to quickly find the “most used” options.
Just an idea off the top of my head. You may have already thought about it, and/or it may be full of holes.
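The “votes” idea above could be sketched roughly like this - a toy example only, assuming the star/fork/follower counts have already been fetched from each platform’s API elsewhere (the entry names, numbers, and the weighting are all invented for illustration):

```python
# Toy sketch: rank a sub-category's entries by a combined "votes" score.
# Assumes metadata (stars, forks, followers) was already fetched, e.g.
# via the GitHub REST API; all names and numbers below are made up.

def votes(entry):
    """Combine the metrics into one score (weighting is arbitrary here)."""
    return entry["stars"] + 2 * entry["forks"] + entry["followers"]

def sort_category(entries):
    """Return entries sorted by descending vote score."""
    return sorted(entries, key=votes, reverse=True)

pastebins = [
    {"name": "pastebin-a", "stars": 120, "forks": 30, "followers": 10},
    {"name": "pastebin-b", "stars": 900, "forks": 15, "followers": 40},
    {"name": "pastebin-c", "stars": 450, "forks": 200, "followers": 5},
]

for entry in sort_category(pastebins):
    print(entry["name"], votes(entry))
```

In a CI task, the fetch step would run first, then this sort would rewrite the list sections before publishing.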
Perhaps you could consider forks, stars, and followers as “votes” and sort each sub category based on the votes.
it’s easier for readers of the list to quickly find the “most used” options.
This would exclude (or move to the bottom of the list) all projects that are not hosted on these (mostly proprietary) platforms. Right now only metadata from GitHub is parsed; in the future this will expand to GitLab, and maybe Gitea instances or similar, but it will take time, and not all platforms have stars/followers/forks features. It would also introduce a huge bias, since GitHub projects have far more forks/followers/… than projects hosted on independent forges. Star counts can also be (and absolutely are) manipulated by some projects that want to get “trending”.
Also, popularity != quality. A project whose code is hosted on cgit can be as good as or even better than a project on GitHub (even more so in the context of self-hosting…).
Just an idea off the top of my head. You may have already thought about it, and/or it may be full of holes.
It was a good idea :) But as you can see, it has its flaws.
In this analogy, GitHub would be the library and the awesome list would be the “recommended by the librarian” section. If my librarian stopped curating that section and just filled it with a specific type of book no matter the quality, I would stop browsing their curated section.
Are you disagreeing with them and saying OP list contains only curated awesome projects?
Do you really need 13 blog platforms? By that point, don’t I have to do another round of analysis and curation to decide what to use? With generic descriptions that seem to be copy-pasted from the projects’ own descriptions, where’s the descriptive, usefulness-assessing part of curation? If one of the 13 is “Extra-awesome, extra-lightweight blog engine.”, why are there even others if that one is “extra awesome”? What does that even mean?
All 13 blog platforms could be awesome, and you’d want all 13 on the list because, while all awesome, they’re awesome in different ways. They each even have different workflows, which is something that’s really important to someone writing blogs. Do you want a WYSIWYG editor? Do you prefer Markdown? Do you want a static site generator? All of these are awesome, and each fits a distinct use case.
Lol, I’m surprised some self-hosters are so lazy they can’t even make a selection out of a few already-curated items. Would you rather have no options at all (use this ONE project, nothing else matters), or not have the list at all and have to find the projects on your own?
I really just don’t understand why we feel the need to be pedantic about the linguistics of a list of resources someone found to be (subjectively, and inherently so) “awesome”.
Do you think it’s fair to say that one size does not fit all, so a giant list of self-hosted options for people to choose from is itself awesome?
The benefit of self-hosting boils down to being able to make your own choices. Having a full list of options to choose from and deciding for yourself fits this community better (for better or worse) than someone else curating it for you.
I miss the days when awesome lists were curated to actually have awesome stuff instead of being a list of 250+ self hostable apps.
There is no way these are all awesome. Call it the giant list of self hosted apps or something that actually makes sense.
The next version of the list will be based on https://github.com/awesome-selfhosted/awesome-selfhosted-data (raw YAML data), which is much easier to integrate with scripts. There is already a CI system running at https://github.com/awesome-selfhosted/awesome-selfhosted-data/actions, and a preview of an enriched export at https://nodiscc.github.io/awesome-selfhosted-html-preview/ that takes stars, last-update dates and other metadata into account. This will all go live “soon”.
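Once the entries are plain data, a small script in that CI pipeline could, for example, flag possibly-unmaintained entries automatically. A hedged sketch: the field names (`name`, `updated_at`) are assumptions about the data format, the entries here are invented, and in the real pipeline the dicts would come from a YAML parser rather than being defined inline:

```python
from datetime import datetime, timedelta

# Sketch: flag possibly-unmaintained entries from parsed metadata.
# Field names are assumptions; real entries would come from a YAML
# parser (e.g. yaml.safe_load), not be hard-coded like this.

STALE_AFTER = timedelta(days=365)

def stale_entries(entries, now):
    """Return names of entries not updated within STALE_AFTER of `now`."""
    return [
        e["name"]
        for e in entries
        if now - datetime.fromisoformat(e["updated_at"]) > STALE_AFTER
    ]

entries = [
    {"name": "project-a", "updated_at": "2023-06-01"},
    {"name": "project-b", "updated_at": "2020-01-15"},
]

print(stale_entries(entries, now=datetime(2023, 8, 1)))  # prints ['project-b']
```

A CI job could post the resulting list to an issue for human review instead of removing entries automatically.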
It covers a pretty wide range of topics, hence the bigger number…
Either way, would you refuse a library because it had too many books, or would you use a search/organizational system to locate what you want?
Again, large size doesn’t necessarily equate to being washed out…
See my reply above (https://lemmy.world/comment/1592102), that’s exactly what is hard to determine objectively.
Yep, you do.
Grumpy cat energy begone
I agree with your point, interesting to see people downvoting without commenting.