Anyone else just sick of following guides that cover 95% of the process, or that slightly miss a step, and then spending hours troubleshooting the setup just to get it to work?

I think I just have too much going on in my “lab”, to the point that when something breaks (and my wife and/or kids complain) it’s more of a hassle to try to remember how to fix or troubleshoot stuff. I only lightly document things cuz I feel like I can remember well enough. But then it’s a struggle to find the time to fix things, or stuff gets tested and 80% completed but never fully used, because life is busy and I don’t have loads of free time to pour into this stuff anymore. I hate giving all that data to big tech, but I also hate trying to manage 15 different containers, VMs, and other services. Some stuff is fine/easy or requires little effort, but other stuff just doesn’t seem worth it.

I miss GUIs, where I could fumble through settings to fix things; it’s easier for me to look through all that than to read a bunch of commands.

Idk, do you get lab burnout? Maybe because I do IT for work too, it just feels like it’s never-ending…

  • falynns@lemmy.world · 7 days ago

My biggest problem is that every docker image thinks it’s a unique snowflake; how could anyone else possibly be using such a unique port number as 80?

I know I can change it; believe me, I know I have to change it. But I wish guides would acknowledge this and emphasize choosing a unique port.
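
For what it’s worth, the collision only exists on the host side of the port mapping; a minimal compose sketch (image names are made up):

```yaml
# If two images both default to port 80, the host-side number is
# yours to pick; the container side stays 80.
services:
  app1:
    image: example/app1
    ports:
      - "8081:80"   # host 8081 -> container 80
  app2:
    image: example/app2
    ports:
      - "8082:80"   # host 8082 -> container 80
```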

    • unit327@lemmy.zip · 7 days ago

Most put it on port 80 with the perfectly valid assumption that the user is sticking a reverse proxy in front of it. The container should expose port 80, not publish it on the host.

      • PieMePlenty@lemmy.world · edited · 6 days ago

        There are no valid assumptions for port 80 imo. Unless your software is literally a pure http server, you should assume something else has already bound to port 80.
        Why do I have vague memories of Skype wanting to use port 80 for something and me having issues with that some 15 years ago?
        Edit: I just realized this might be for containerized applications… I’m still used to running it on bare metal. Still though… 80 seems sacrilege.

    • lilith267@lemmy.blahaj.zone · 7 days ago

Containers are meant to be used with docker networks, which makes this a non-issue. Most of the time you want your services to listen on 80/443, since those are the default ports your reverse proxy is going to call.

    • Auli@lemmy.ca · 6 days ago

Why expose any ports at all? Just use a reverse proxy, publish only its port, and everything else happens internally.
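
A sketch of that pattern, assuming a compose stack with a shared internal network (proxy image and service names are illustrative):

```yaml
# Only the proxy publishes a host port; the apps are reachable
# solely over the internal docker network, by service name.
services:
  proxy:
    image: nginx:alpine
    ports:
      - "443:443"          # the only port on the host
    networks: [internal]
  app:
    image: example/app     # listens on 80 inside the network
    networks: [internal]   # proxy reaches it at http://app:80

networks:
  internal: {}
```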

  • zen@lemmy.zip · 7 days ago

    Yes, I get lab burnout. I do not want to be fiddling with stuff after my day job. You should give yourself a break and do something else after hours, my dude.

    BUT

I do not miss GUIs. Containers are a massive win because they are declarative, reproducible, and can be version controlled.

    • mrnobody@reddthat.com (OP) · 7 days ago

Yeah, since Christmas, and I know it sounds silly, but I’ve been playing a ton of video games with my kids lol. But not like CoD, more like Grounded 2, Gang Beasts, and Stumble Guys lmao

      • zen@lemmy.zip · 6 days ago

You’re doing it right. Playing cool games with your kids sounds like a blast and some great memories :)

  • Lka1988@lemmy.dbzer0.com · 6 days ago

    I don’t run a service unless it has reasonably good documentation. I’ll go through it first and make sure I understand how it’s supposed to run, what port(s) are used, and if I have an actual, practical use case for it.

    You’re absolutely correct in that sometimes the documentation glosses over or completely omits important details. One such service is Radicale. The documentation for running a Docker container is severely lacking.

  • friend_of_satan@lemmy.world · 7 days ago

    You should take notes about how you set up each app. I have a directory for each self hosted app, and I include a README.md that includes stuff like links to repos and tutorials, lists of nuances of the setup, itemized lists of things that I’d like to do with it in the future, and any shortcomings it has for my purposes. Of course I also include build scripts so I can just “make bounce” and the software starts up without me having to remember all the app-specific commands and configs.
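
As a sketch of what a per-app build script could look like (the target names, including “bounce”, are just my guess at the setup described; Makefile recipes need real tabs):

```makefile
# Hypothetical per-app Makefile; 'make bounce' restarts the stack
# without having to remember any app-specific commands.
up:
	docker compose up -d

down:
	docker compose down

bounce: down up

logs:
	docker compose logs -f --tail=100
```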

    If a tutorial gets you 95% of the way, and you manage to get the other 5% on your own, write down that info. Future you will be thankful. If not, write a section called “up next” that details where you’re running into challenges and need to make improvements.

    • clif@lemmy.world · 7 days ago

I started a blog specifically to make myself document these things in a digestible manner. I doubt anyone will ever see it, but it’s for me. It’s a historical record of my projects and the steps and problems experienced when setting them up.

      I’m using 11ty so I can just write markdown notes and publish static HTML using a very simple 11ty template. That takes all the hassle out of wrangling a website and all I have to do is markdown.

      If someone stumbles across it in the slop ridden searchscape, I hope it helps them, but I know it will help me and that’s the goal.

    • 123@programming.dev · edited · 6 days ago

I found that a git repo with the docker compose and config files works well enough, as long as you are willing to maintain a backup of the volumes, plus an .env file in KeePass (also backed up) for anything that might not be OK in a repo (even a private one), like passwords and keys.
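
A minimal sketch of that split, with hypothetical service and variable names (the .env file stays git-ignored and lives in KeePass):

```yaml
# compose.yml lives in the (private) repo; secrets do not.
services:
  db:
    image: postgres:16
    env_file: .env                            # DB_PASSWORD etc. resolved from here
    volumes:
      - ./data/db:/var/lib/postgresql/data    # this is the volume to back up
```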

  • BrightCandle@lemmy.world · 7 days ago

I reject a lot of apps that require a docker compose file containing a database, caching infrastructure, etc. All I need is the process, and they ought to use SQLite by default because my needs are not going to exceed its capabilities. A lot of these self-hosted apps are overbuilt and ship with poor defaults or none at all, causing a lot of extra work to deploy them.

    • qaz@lemmy.world · 7 days ago

Some apps really go overboard. I tried out a bookmark collection app called Linkwarden some time ago, and it needed 3 docker containers and 800MB of RAM.

    • MonkeMischief@lemmy.today · 5 days ago

      Databases.

I ran PaperlessNGX for a while, and everything was fine. Then suddenly I realized its version of PostgreSQL was no longer supported, so the container wouldn’t start.

Following some guides, trying to log into the container by itself, and then using a bunch of commands to attempt to migrate said database has not really worked.

      This is one of those things that feels like a HUGE gotcha to somebody that doesn’t work with databases.

      So the container’s kinda just sitting there, disabled. I’m considering just starting it all fresh with the same data volume and redoing all that information, or giving this thing another go…

      …But yeah I’ve kinda learned to hate things that rely on database containers that can’t update themselves or have automated migration scripts.

      I’m glad I didn’t rely on that service TOO much.
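
For anyone in the same spot: the usual escape hatch for a major-version Postgres bump is a plain dump and restore. A hedged runbook sketch (container name, user, and paths are placeholders, the old image must still be able to start once, and it needs a running docker daemon):

```shell
# 1. Start the stack on the OLD postgres image one last time,
#    then dump everything to a plain SQL file on the host.
docker exec paperless-db pg_dumpall -U paperless > dump.sql

# 2. Stop the stack and move the old data directory aside.
docker compose down
mv ./data/db ./data/db.old

# 3. Bump the image tag in compose.yml, start a fresh (empty)
#    database, and feed the dump back in.
docker compose up -d db
docker exec -i paperless-db psql -U paperless -d postgres < dump.sql
```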

      • BrightCandle@lemmy.world · 5 days ago

It’s a big problem. I also dump projects that don’t automatically migrate their own SQLite schemas and require manual intervention. That is a terrible way to treat the customer: just update the file. Separate databases always run into versioning issues at some point, require manual intervention and data migration, and waste a massive amount of the user’s time.

  • Dylancyclone@programming.dev · 7 days ago

If you’ll let me self-promote for a second, this was part of the inspiration for my Ansible Homelab Orchestration project. After dealing with a lot of those projects that practically force you to read through the code to get a working environment, I wanted a way to reproducibly spin up my entire homelab should I need to move computers or if my computer dies (both of which have happened, and having a setup like this helped tremendously). So far the ansible playbook supports 117 applications, most of which can be enabled with a single configuration line:

    immich_enabled: true
    nextcloud_enabled: true
    

    And it will orchestrate all the containers, networks, directories, etc for you with reasonable defaults. All of which can be overwritten, for example to enable extra features like hardware acceleration:

    immich_hardware_acceleration: "-cuda"
    

    Or to automatically get a letsencrypt cert and expose the application on a subdomain to the outside world:

    immich_available_externally: true
    

It also comes with scripts and tests to help you add your own applications and ensure they work properly.

    I also spent a lot of time writing the documentation so no one else had to suffer through some of the more complicated applications haha (link)

    Edit: I am personally running 74 containers through this setup, complete with backups, automatic ssl cert renewal, and monitoring

      • Dylancyclone@programming.dev · edited · 6 days ago

No that’s totally fair! I’m a huge fan of making things reproducible since I’ve run into too many situations where things need to be rebuilt, and I’m always open to ways to improve it. At home I use ansible to configure everything, and at work we use ansible and declare our entire Jenkins instance as (real) code. I don’t really have the time for (and I’m low-key scared of the rabbit hole that is) Nix, and to me my homelab is something that is configured (idempotently) rather than something I wanted to handle with scripts.

        I even wrote some pytest-like scripts to test the playbooks to give more productive errors than their example errors, since I too know that pain well :D

        That said, I’ve never heard of PyInfra, and am definitely interested in learning more and checking out that talk. Do you know if the talk will be recorded? I’m not sure I can watch it live. Edit: Found a page of all the recordings of that room from last year’s event https://video.fosdem.org/2025/ua2220/ So I’m guessing it will be available. Thank you for sharing this! :D

        I love the “Warning: This talk may cause uncontrollable urges to refactor all your Ansible playbooks” lol I’m ready

    • WhiteOakBayou@lemmy.world · 7 days ago

That’s neat. I never gave ansible playbooks any thought because I figured they would just add a layer of abstraction, and that containers couldn’t get any easier, but reading your post I think I’ve been wrong.

  • moistracoon@lemmy.zip · 7 days ago

    While I am gaining plentiful information from this comments section already, wanted to add that the IT brain drain is real and you are not alone.

    • mrnobody@reddthat.com (OP) · 7 days ago

Haha, thanks! It’s probably more problematic being a solo IT guy, as it feels like I don’t always have dedicated time to get projects done. Part of why my lab is overkill is because I want something at work, so I spend a little time at home figuring stuff out, but, you know, family time n all…

It’s still fun mostly, but work keeps assuming I must’ve freed up a lot of time by automating or improving stability, so I keep being rewarded with more work outside of IT.

  • termaxima@slrpnk.net · 7 days ago

My advice is: just use Nix.

    It always works. It does all the steps for you. You will never “forget a step” because either someone has already made a package, or you just make your own that has all the steps, and once that works, it works literally forever.

      • Prontomomo@lemmy.world · 6 days ago

I just set up something for my sibling and had to make it super easy. I’d thought about YunoHost, but I ended up using Runtipi, because it does use docker underneath it all, but you never have to see that.

        From my limited experience it was super easy and a pleasure to use, I’m considering using it instead of my current portainer setup.

  • Da Oeuf@slrpnk.net · 7 days ago

    Check out the YUNOhost repos. If everything you need is there (or equivalents thereof), you could start using that. After running the installation script you can do everything graphically via a web UI. Mine runs for months at a time with no intervention whatsoever. To be on the safe side I make a backup before I update or make any changes, and if there is a problem just restore with a couple of clicks via my hosting control panel.

I got into it because it’s designed for noobs, but I think it would be great for anyone who just wants to relax. Highly recommend.

    • mrnobody@reddthat.com (OP) · 3 days ago

      Apparently I’m more than noob level 😅 every time I try to get to Traccar, I get my gateway’s landing page.

Regular Traccar uses port 8082 for the web interface and 5055 for the app. I can’t reach it either through the domain (gateway) or the LAN IP (YunoHost).

      Normally I’d go 1.2.3.4:8082 (not my real lan IP) but Yuno seems to ignore that.

      I’ll do some more digging when I get home, I’m at work with broken concentration

  • HamsterRage@lemmy.ca · 7 days ago

    As an example, I was setting up SnapCast on a Debian LXC. It is supposed to stream whatever goes into a named pipe in the /tmp directory. However, recent versions of Debian do NOT allow other processes to write to named pipes in /tmp.

    It took just a little searching to find this out after quite a bit of fussing about changing permissions and sudoing to try to funnel random noise into this named pipe. After that, a bit of time to find the config files and change it to someplace that would work.
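
For reference, the fix amounts to pointing the stream source somewhere outside /tmp (the path below is just an example; recent Debian typically gives the service a private /tmp, so pipes created there are invisible to other processes):

```ini
# /etc/snapserver.conf -- move the FIFO out of /tmp
[stream]
source = pipe:///var/lib/snapserver/snapfifo?name=default
```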

    Setting up the RPi clients with a PirateAudio DAC and SnapCast client also took some fiddling. Once I had it figured out on the first one, I could use the history stack to follow the same steps on the second and third clients. None of this stuff was documented anywhere, even though I would think that a top use of an RPi Zero with that DAC would be for SnapCast.

The point is that it seems like every single service has these little undocumented quirks that you just have to figure out for yourself. I have 35 years of experience as an “IT Guy”, although mostly as a programmer. But I remember working on HP-UX 9.0 systems, so I’ve been doing this for a while.

    I really don’t know how people without a similar level of experience can even begin to cope.

  • brucethemoose@lemmy.world · 7 days ago

    I find the overhead of docker crazy, especially for simpler apps. Like, do I really need 150GB of hard drive space, an extensive poorly documented config, and a whole nested computer running just because some project refuses to fix their dependency hell?

    Yet it’s so common. It does feel like usability has gone on the back burner, at least in some sectors of software. And it’s such a relief when I read that some project consolidated dependencies down to C++ or Rust, and it will just run and give me feedback without shipping a whole subcomputer.

    • Encrypt-Keeper@lemmy.world · 7 days ago

This is a crazy take. Docker doesn’t involve much overhead. I’m not sure where your 150GB hard drive space comment comes from, as I run dozens of containers on machines with 30-50GB of hard drive space. There’s no nested computer; docker containers are not virtualization. And containers have nothing to do with a single project’s “dependency hell”. They’re for your dependency hell when trying to run a bunch of different services on one machine, or to reproduce them quickly and easily across machines.

    • zen@lemmy.zip · 7 days ago

      Docker in and of itself is not the problem here, from my understanding. You can and should trim the container down.

      Also it’s not a “whole nested computer”, like a virtual machine. It’s only everything above the kernel, because it shares its kernel with the host. This makes them pretty lightweight.

It’s sometimes even useful to run Rust or C++ code in a Docker container, for portability, provided of course you do it right. For Rust, that typically means a multi-stage build to bring the container size down.
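
A minimal sketch of that multi-stage pattern for a Rust service (the crate/binary name myapp and version tags are hypothetical):

```dockerfile
# Stage 1: build with the full Rust toolchain (large image)
FROM rust:1.79 AS builder
WORKDIR /src
COPY . .
RUN cargo build --release

# Stage 2: ship only the compiled binary on a slim base
FROM debian:bookworm-slim
COPY --from=builder /src/target/release/myapp /usr/local/bin/myapp
CMD ["myapp"]
```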

      Basically, the people making these Docker containers suck donkey balls.

      Containers are great. They’re a huge win in terms of portability, reproducibility, and security.

      • brucethemoose@lemmy.world · 7 days ago

        Yeah, I’m not against the idea philosophically. Especially for security. I love the idea of containerized isolation.

        But in reality, I can see exactly how much disk space and RAM and CPU and bandwidth they take, heh. Maintainers just can’t help themselves.

        • NewNewAugustEast@lemmy.zip · 7 days ago

          Want to mention some? I have no containers using that at all.

          Perhaps you never clean up as you move forward? It’s easy to forget to prune them.
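
For anyone checking, the usual audit-and-cleanup pair looks something like this (needs a running docker daemon, so treat it as a runbook and read the prompts before confirming):

```shell
docker system df         # where the space is going: images, containers, volumes
docker system prune -a   # remove stopped containers, unused networks and images
docker volume prune      # remove unused volumes -- check twice before confirming
```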

          • zen@lemmy.zip · 6 days ago

            Yep and I also want to add that you can use compose.yml to limit the CPU and RAM utilisation of each container, which can help in some cases.
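
For example, with arbitrary limit values (docker compose honours these caps):

```yaml
services:
  app:
    image: example/app    # hypothetical image
    deploy:
      resources:
        limits:
          cpus: "0.50"    # at most half a CPU core
          memory: 512M    # hard RAM ceiling for the container
```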

    • unit327@lemmy.zip · 7 days ago

As someone used to the bad old days: gimme containers. Yes, it kinda sucks, but it sucks less than the alternative. Can you imagine trying to get multiple versions of postgres working for different applications you want to host on the same server? I also love being able to just use the host OS stock packages without needing to constantly compile and install custom things to make x or y work.

  • Encrypt-Keeper@lemmy.world · 7 days ago

    If a project doesn’t make it dead simple to manage via docker compose and environment variables, just don’t use it.

    I run close to 100 services all using docker compose and it’s an incredibly simple, repeatable, self documenting process. Spinning up some new things is effortless and takes minutes to have it set up, accessible from the internet, and connected to my SSO.

Sometimes you see a program and it starts with “Clone this repo”, and it has a docker compose file, six env files, some extra config files, and consists of a front-end container, back-end container, database container, message-queueing container, etc… just close that web page and don’t bother with that project lol.

    That being said, I think there’s a bigger issue at play here. If you “work in IT” and are burnt out from “15 containers and a lack of a gui” I’m afraid to say you’re in the wrong field of work and you’re trying to jam a square peg in a round hole

    • mrnobody@reddthat.com (OP) · 7 days ago

I agree with that 3rd paragraph lol. That’s probably some of my issue at times. As far as IT goes, does it not get overwhelming if you’ve had a 9-hour workday, only to hear someone at home complain that this other thing you run doesn’t work, and now you have to troubleshoot that too?

      Without going into too much detail, I’m a solo operation guy for about 200 end users. We’re a Win11 and Office shop like most, and I’ve upgraded pretty much every system since my time starting. I’ve utilized some self-host options too, to help in the day to day which is nice as it offloads some work.

It’s just, especially after a long day, playing IT at home can be a bit much. I don’t normally mind, but I think I just know the Windows stuff well enough through and through, so taking on new Docker or self-host tools is apples and oranges sometimes. Maybe I’m getting spoiled with all the turnkey stuff at work, too.

      • Encrypt-Keeper@lemmy.world · edited · 7 days ago

        I’m an infrastructure guy, I manage a few datacenters that host some backends for ~100,000 IoT devices and some web apps that serve a few million requests a day each. It sounds like a lot, but the only real difference between my work and yours is that at the scale I’m working with, things have to be built in a way that they run uninterrupted with as little interaction from me as possible. You see fewer GUIs, and things stop being super quick and easy to initially get up and running, but the extra effort spent architecting things right rewards you with a much lighter troubleshooting and firefighting workload.

You sorta stop being a mechanic that maintains and fixes problem cars, and start being an engineer that builds cars to have as few problems as possible. You lose the luxury of being able to fumble around under a car and visually find an oil filter to change, and start having to make decisions on where to put the oil filter from scratch, but to me it is far more rewarding and satisfying. And ultimately, the way that self hosting works these days, it has embraced the latter over the former. It’s just a different mindset from the legacy click-ops sysadmin days of IT.

        What this looks like to me in your example is, when I have users of my selfhosted stuff complain about something not working, I’m not envisioning yet another car rolling into the shop for me to fix. I envision a puzzle that must be solved. Something that needs optimization or rearchitecting that will make the problem that user had go away, or at the very least fix itself, or alert me so I can fix it before the user complains.

This paradigm I work under is more work, but the work is rewarding and it’s “fun” when I identify a problem that needs solving and solve it. If that isn’t “fun” to you, then all you’re left with is the “more work” part.

        So ultimately what you need to figure out is what your goal is. If you’re not interested in this new paradigm and you just want turnkey solutions there are ways of self hosted that are more suited to that mindset. You get less flexibility, but there’s less work involved. And to be clear there’s absolutely nothing wrong with that. At the end of the day you have to do what works for you.

        My recommendations to you assuming you just want to self hosted with as little work and maintenance as possible:

        • Stick with projects that are simple to set up and are low maintenance. If a project seems like a ton of work get going, just don’t use it. Take the time to shop around for something simpler. Even I do this a lot.
        • Try some more turn key self hosting solutions. Anything with an App Store for applications. UnRAID, CasaOS, things of that nature that either have one click deploy apps, or at least have pre-filled templates where all you need to do is provide a couple variable values. You won’t learn as much career wise this way, but it’ll take a huge mental load off.
        • When it comes to tools your family is likely to depend on and thus complain about, instead of selfhosting those things perhaps look for a non-big tech alternative. For example, self hosting email can be a lot of work. But you don’t have to use Gmail either. Move your family to ProtonMail or Tutanota, or other similar privacy friendly alternatives. Leave your self hosting for less critical apps that nobody will really care if it goes down and you can fix at your leisure.
    • theparadox@lemmy.world · 7 days ago

      That being said, I think there’s a bigger issue at play here. If you “work in IT” and are burnt out from “15 containers and a lack of a gui” I’m afraid to say you’re in the wrong field of work and you’re trying to jam a square peg in a round hole.

      Honestly, this is the kind of response that actually makes me want to stop self hosting. Community members that have little empathy.

I work in IT and, like most, we’re also a Windows shop. I have zero professional experience with Linux, but I’m learning through my home lab while simultaneously trying to extract myself from the privacy cluster fuck that is the current consumer tech industry. It’s a transition, and the documentation I find more or less matches the OP’s experience.

I research, pick what seems best for my situation (often the most popular option), get it working with sustainable, minimal complexity, and in short order find that some small, vital aspect of its setup (like the reverse proxy) has literally zero documentation for getting it to work with some other vital part of my setup. I guess I should have made a better choice 18 months ago, when I didn’t expect to need this new service. I find some two-year-old GitHub issue comment that allegedly solves my exact problem, but I can’t translate it to the version I’m running because it’s two revisions newer. Most other responses are incomplete, RTFM, or “git gud n00b”, like your response here.

      Wherever you work, whatever industry, you can get burnt out. It’s got nothing to do with if you’ve “got what it takes” or whatever bullshit you think “you’re in the wrong field of work and you’re trying to jam a square peg in a round hole” equates to.

      I run close to 100 services all using docker compose and it’s an incredibly simple, repeatable, self documenting process. Spinning up some new things is effortless and takes minutes to have it set up, accessible from the internet, and connected to my SSO.

      If it’s that easy, then point me to where you’ve written about it. I’d love to learn what 100 services you’ve cloned the repos for, tweaked a few files in a few minutes, and run with minimal maintenance all working together harmoniously.

      • Encrypt-Keeper@lemmy.world · edited · 7 days ago

        You’ve completely misread everything I’ve said.

        Let’s make a few things clear here.

        My response is not “Git gud”. My response is that sometimes there are selfhosted projects that are really cool and many people recommend, but the set up for them is genuinely more complex than it should be, and you’re better off avoiding them instead of banging your head against a wall and stressing yourself out. Selfhosting should work for you, not against you. You can always take another crack at a project later when you’ve got more hands on experience.

Secondly, it’s not a matter of whether OP “has what it takes” in his career. I simply pointed out the fact that everything he seems to hate about selfhosting are fundamental core principles of working in IT. My response to him isn’t that he can’t hack it; it seems more like he just genuinely doesn’t like it. I’m suggesting that it won’t get better because this is what IT is. What that means to OP is up to him. Maybe he doesn’t care because the money is good, which is valid. But maybe he considers eventually moving into a career he doesn’t hate, and then the selfhosting stuff won’t bother him so much. As a matter of fact, OP himself didn’t take offense to that suggestion the way you did. He agreed with my assessment.

        As you learn more about self hosting, you’ll find that certain things like reverse proxy set up isn’t always included in the documentation because it’s not really a part of the project. How reverse proxies (And by extension http as a whole) work is a technology to learn on its own. I rarely have to read documentation on RP for a project because I just know how reverse proxying works. It’s not really the responsibility of a given project to tell you how to do it, unless their project has a unique gotcha involved. I do however love when they do include it, as I think that selfhosting should be more accessible to people who don’t work in IT.

        If it’s that easy, then point me to where you’ve written about it. I’d love to learn what 100 services you’ve cloned the repos for, tweaked a few files in a few minutes, and run with minimal maintenance all working together harmoniously.

        Most of them TBH. I often don’t engage with a project that involves me cloning a repo because I know it means it’s going to be a finicky pain in the ass. But most things I set up were done in less than 20 minutes, including secure access from the internet using a VPS proxy with a WAF and CrowdSec, and integration with my SSO. If you want to share with me your common pain points, or want an example of what my workflow looks like let me know.

          • theparadox@lemmy.world · 7 days ago

I’ve misread the tone, I agree. I apologize for that. However, I find that his complaints were not about things that are always “fundamental core principles of working in IT”. For some, sure, but where I work I’m by far the employee with the most familiarity with CLI/powershell and scripting. Almost everything is done via a GUI or web interface if it can be. I would tell any of my coworkers that maybe IT isn’t for them.

I also, in a rush to finish, misremembered and read some of your words too quickly. You did not recommend the “clone a repo” solutions; you advised against them. Again, I apologize. I am still suspicious of this massive collection of self-hosted services that work perfectly with each other after like 20 minutes of tweaking and little maintenance; that was what I was trying to imply with that section. I’ve lost close to a dozen 6-10 hour sessions on Saturdays, pulling my hair out because I can’t figure out how to do some specific thing needed to make some “easy” new service work with my setup. It’s like that Malcolm in the Middle clip of the dad, five projects deep at the end of the day, trying to fix some simple problem from the morning.

          I’ll try to document some of my issues this weekend. I would honestly appreciate any help or recommendations.

          • Encrypt-Keeper@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            edit-2
            7 days ago

            For some, sure, but where I work I’m by far the employee with the most familiarity with CLI/powershell and scripting. Almost everything is done via a GUI or web interface if it can be.

            I don’t mean this in a disparaging way, because I too got my start in an environment like that, but that’s a very legacy environment. When I talk about core principles of working in IT, I mean the state of IT today in 2026, as well as where it’s headed in the future. It sounds like your workplace is one of those SMBs that’s still stuck in the glory days. That’s not what IT is; it’s what IT was. So unless you’re near the end of your career, you’re going to have to give that up and embrace this new paradigm, or be washed out eventually. So when I say “it isn’t the field for you” in the context of OP, I just mean that it isn’t going to get better. It’ll be less and less like the way you know it every day, and more and more like the way OP doesn’t like it.

            For example, you say you are the most familiar in your entire workplace with “powershell and scripting”, yet I literally got teased just the other day for solving a niche problem with a powershell script: “How very 2010 of you”.

            I don’t say this to belittle you, as I was the same guy not too many years ago. I get that you’re banging your head against this new paradigm, but this is the stuff you really do want to stick with IF your goal is to grow in IT long term. It will click eventually, given enough time. I’m definitely willing to help with any questions you might have, and if I have time I can try to demonstrate my workflow for a standard container deployment.

            Some questions I would ask you:

            • How are you running your docker containers? Run commands? Compose? Portainer or some alternative?
            • Are you trying to expose them to the internet, or only internally?
            • Do you use a reverse proxy, or are you just exposing direct ports and connecting that way?
            • Do you have an example of a specific project you struggled to get running?
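            And for reference, a minimal Compose sketch of the reverse proxy pattern mentioned elsewhere in this thread — service names and images are just placeholders, and only the proxy publishes ports to the host:

            ```yaml
            services:
              app:
                image: traefik/whoami    # placeholder app serving HTTP on 80
                expose:
                  - "80"                 # reachable by the proxy network only, not published
                networks:
                  - proxy

              caddy:
                image: caddy:2
                ports:
                  - "80:80"              # the only ports published to the host
                  - "443:443"
                networks:
                  - proxy

            networks:
              proxy: {}
            ```

            With that layout, adding a new service is just another entry on the proxy network; port collisions on the host stop being a problem.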
      • WhyJiffie@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        3
        ·
        7 days ago

        Honestly, this is the kind of response that actually makes me want to stop self hosting. Community members that have little empathy.

        why? it wasn’t saying they should quit self hosting. it wasn’t condescending either, I think. it was about work.

        but truth be told IT is a very wide field, and maybe that generalization is actually not good. still, 15 containers is not much, and as I see it they help with not letting all your hosted software make a total mess on your system.

        working with the terminal sometimes feels like working with long tools in a narrow space, not being able to fully use my hands. but UX design is hard, so making useful GUIs is hard too, and it takes much more time than making a well organized CLI tool.
        in my experience the most important thing here is to get used to common operations in a terminal text editor, and to find an organized directory structure for your services that works for you. Also, use the man pages and --help outputs. But when you can afford to, you could scp files or complete directories to your desktop for editing with a proper text editor.
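        a sketch of what I mean by an organized directory structure — the service names are just examples, adjust to whatever you run:

        ```shell
        #!/bin/sh
        # one directory per service: its compose file plus a README
        # holding the notes you'd otherwise have to remember
        base="${SERVICES_DIR:-$HOME/services}"

        for svc in jellyfin nextcloud vaultwarden; do
            mkdir -p "$base/$svc"
            touch "$base/$svc/docker-compose.yml"
            touch "$base/$svc/README.md"
        done

        ls "$base"
        ```

        once every service lives in its own directory like this, scp-ing one of them to your desktop for editing is a single command.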

        • theparadox@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          7 days ago

          IT is a very wide field, and maybe that generalization is actually not good

          That was what set me off. I was having a bad morning and misread the tone to be more dismissive than it likely was.

  • fozid@feddit.uk
    link
    fedilink
    English
    arrow-up
    31
    arrow-down
    3
    ·
    7 days ago

    🤮 I hate GUI config! Way too much hassle. Give me a CLI and a config file any day! I love being able to just ssh into my server anytime from anywhere and fix, modify, or install and set up something.
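    For the “from anywhere” part, an entry in ~/.ssh/config does most of the heavy lifting — the hostname and key name below are made up:

    ```
    Host homeserver
        HostName server.example.com
        User fozid
        IdentityFile ~/.ssh/id_ed25519
        ServerAliveInterval 60
    ```

    After that it’s just `ssh homeserver`, no flags to remember.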

    The key to not being overwhelmed is manageable deployment. Only set up one service at a time; get it working, safe, and reliable before switching to using it full time, then, once you’re certain it’s solid, implement the next tool or deployment.

    My servers have almost no breakages or issues. They run 24/7/365, solid and reliable. The only time anything breaks is during an update or a new service deployment, and those are just user error on my part, not the server’s fault.

    Although I don’t work in IT, so maybe the small bits of maintenance I actually do feel like less to me?

    I have 26 containers running, plus a fair few bare metal services. Plus I do a bit of software dev as a hobby.

    • youmaynotknow@lemmy.zip
      link
      fedilink
      English
      arrow-up
      6
      ·
      7 days ago

      Story of my life (minus the dev part). I self host everything on a Proxmox server, with CasaOS for sandboxing and trying out new FOSS stuff. Unless the internet goes out, everything is up 24/7, and rarely do I need to go in there and fix something.

    • towerful@programming.dev
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      1
      ·
      7 days ago

      I love CLI and config files, so I can write scripts to automate it all.
      It documents itself.
      Whenever I have to do GUI stuff I always forget a step or do things out of order or something.
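      A rough sketch of the kind of self-documenting script I mean — the layout (one directory per service holding a docker-compose.yml) and the DRY_RUN flag are my own conventions, not anything standard:

      ```shell
      #!/bin/sh
      # deploy.sh -- bring up one service; the script itself is the doc.
      # DRY_RUN=1 prints the command instead of calling docker.
      set -eu

      deploy() {
          svc="$1"
          echo "== deploying $svc =="
          if [ "${DRY_RUN:-0}" = "1" ]; then
              echo "docker compose -f $svc/docker-compose.yml up -d"
          else
              docker compose -f "$svc/docker-compose.yml" up -d
          fi
      }

      # dry run first, so nothing actually starts until you're sure
      DRY_RUN=1 deploy jellyfin
      ```

      Run it once with DRY_RUN=1 to see exactly what it will do, then again without — and six months later the script still remembers the steps for you.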

      • fozid@feddit.uk
        link
        fedilink
        English
        arrow-up
        2
        ·
        6 days ago

        exactly this! notes in the config files are all the documentation i need. and scripting and automating are so important to a self running and self healing server.