Whatever.
Gitea. Self-hosted.
🤘
Restrictiveness depends on page content and how tasks are implemented. I’ve tried different GTD software and always end up with an outliner that has task functions, because you can’t fit the whole context into a task name alone (which is sometimes also limited in character count).
Currently I collect all my tasks from various JIRAs, Redmines and other trackers in Joplin; with the repeating todos plugin it also helps me track tasks like “check MITRE for new CVEs in used software weekly”. I have a good template for such task notes that keeps all the context I need when the task repeats, e.g. a note to check specific software for updates on the next repeat. The agenda plugin shows (over)due tasks in the sidebar so they’re always on track. You can do something similar with Obsidian too, I just don’t like its editor :).
BTW this is exactly why managers love JIRA and Confluence with their integration into each other. A feature’s wiki page will eventually contain links to tasks with the necessary context.
I’ve been using ZFS on Proxmox for a couple of years under different workloads (home servers, production at work), and it is very good.
Just tune it as you need :)
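A minimal sketch of the kind of tuning I mean (the pool/dataset name `tank/data` is hypothetical; adjust to your layout and workload):

```shell
# Hypothetical dataset name; run as root on a host with ZFS installed.
# Enable zstd compression - cheap on CPU, big space savings on compressible data.
zfs set compression=zstd tank/data
# Skip access-time updates to cut needless metadata writes.
zfs set atime=off tank/data
```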
Well… You know… It’s kind of.
Go always with software RAID where possible to avoid vendor lock-in.
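As a sketch of why software RAID avoids the lock-in (device names are hypothetical, and creating the array destroys data on the listed disks): an mdadm mirror built on one box can be reassembled on any other Linux machine, with no matching controller required.

```shell
# Create a two-disk mirror (hypothetical devices - this wipes them!).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# Later, on a completely different server: scan the disks and reassemble.
mdadm --assemble --scan
```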
As a developer team lead, I’d sign under every word.
Except GTD. It stops working when you become a manager: context switching between tasks is a real pain, and I recommend starting to use outliners with task functionality. Every note should be a big task (e.g. an epic) with all the timelines, links to follow-up notes, and subtasks in it. Only then will context switching be a breeze - you just open a note or follow-up note (or email, whatever) and there you go, after 2 minutes you’re ready to rock!
And don’t forget that a new task will fly in every hour or so!
Just try Joplin or Obsidian with a tasks plugin for that.
Just search for “amd hdmi 2.1 linux” to get the full story.
In short: AMD wrote an implementation of the HDMI 2.1 standard for the Linux driver, but it required approval from the HDMI Forum. They (the Forum) denied it, so AMD couldn’t ship it.
I never said that it is virtualization. For ease of understanding I called the created namespaces “virtualized”. Here “virtualized” = “isolated”. Systemd is able to do that with every process, btw.
Also, some “smart individuals” call containerization a type 3 hypervisor, which makes me laugh so hard :)
It virtualizes only parts of the operating system (namely process and network namespaces, with the ability to pass through devices and mount points). It still uses the host kernel, for example.
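A quick illustration with util-linux’s `unshare` (usually needs root, depending on kernel settings): a new network namespace is “virtualized” in exactly this isolated sense, while the process still runs on the host kernel.

```shell
# New network namespace: only an isolated, down loopback interface is visible.
sudo unshare --net ip link show
# Same kernel version as the host, though - no hardware is virtualized.
sudo unshare --net uname -r
```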
Vikunja is the only viable thing. The CalDAV spec allows assigning tasks, but we have an XMPP moment here: the spec exists, but almost no apps implement it properly.
Also you may take a look at https://github.com/awesome-selfhosted/awesome-selfhosted#task-management--to-do-lists
Take a look at the HDMI versions on Wikipedia and select a cable accordingly.
Don’t go with “cheapest” advice, as you might buy a cable that is only capable of FullHD@30Hz, which I think is not what you want.
Also, if your card and TV have DisplayPort, I’d suggest going with that instead of HDMI. It’s just better and doesn’t forbid open-source implementations, like the HDMI Forum does with HDMI 2.1.
Syncthing is a sync utility, which is different from a cloud service. They have different purposes and are for different tasks.
They all do one thing - syncing files. And the least painful implementation is Nextcloud’s, at least for me.
I’ve been using Caddy for a couple of years already at home, yet using certbot at work, because of the requirement to use nginx as a balancer.
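For reference, the home setup can be as small as one command; the domain and upstream port here are hypothetical, and Caddy takes care of the TLS certificates itself:

```shell
# Proxy HTTPS traffic for a (hypothetical) domain to a local app;
# Caddy obtains and renews certificates automatically.
caddy reverse-proxy --from example.home.example --to localhost:8080
```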
I’m using Nextcloud for files and photo/video sync from mobile, Joplin for notes and tasks, and Baikal for the calendar (shared with my wife, who uses iOS/macOS).
There is nothing better than Nextcloud for files; I tried Syncthing and Seafile, and both suck in one way or another.
Also, I was using Vikunja for tasks, but its UI and UX… well, strange and not eye-candy. I hope someday they’ll rewrite it.
For me, the inability alone to reassemble a RAID array on a different server (with a different controller, or even without one) for data recovery shouts a big “NO” to any RAID controller in a home lab.
While it is fun to have an “industrial grade” thing, it isn’t fun to recover data from such arrays. Also, ZFS is a very good filesystem (imagine having 4.8 TB of data on a 4 TB mirror - that’s my case with zstd compression), but it doesn’t play well with RAID controllers: you’ll experience slowdowns and frequent data corruption.
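The 4.8-on-4 situation is easy to check on your own pool (the pool name `tank` is hypothetical):

```shell
# 'logicalused' is the pre-compression data size, 'used' is what it
# actually costs on disk; 'compressratio' is the resulting ratio.
zfs get used,logicalused,compressratio tank
```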