I thought I'd make this thread for all of you out there who have questions but are afraid to ask them. This is your chance!

I’ll try my best to answer any questions here, but I hope others in the community will contribute too!

  • Tovervlag@feddit.nl · 1 year ago

    Ctrl+Alt+F1, F2, etc. Why do these desktops/CLIs exist? What was their intended purpose, and what do people use them for today? Is it just legacy, or does it still serve a purpose?

    • bloodfart@lemmy.ml · 1 year ago

      pxe net boot

      set up a pxe boot server, set all computers to be imaged to boot over pxe, point them at the server and away you go
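
      for example, a proxy-dhcp setup with dnsmasq looks roughly like this (interface name and paths are assumptions - adjust for your network):

      # /etc/dnsmasq.conf - minimal pxe sketch
      interface=eth0
      dhcp-range=192.168.1.0,proxy    # proxy mode: leave real dhcp to the router
      dhcp-boot=pxelinux.0            # boot file offered to pxe clients
      enable-tftp
      tftp-root=/srv/tftp             # where pxelinux.0 and the images live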

        • bloodfart@lemmy.ml · 1 year ago

          maybe have your pxe boot service on a vlan or something at least.

          at least a decade ago some stuff you wouldn’t expect will just connect up to any old server and accept any old image it’s offering with no authentication or checks whatsoever. it’s annoying when a power outage knocks everything down and some equipment comes up with a different hat on.

  • snooggums@midwest.social · 1 year ago

    I have a Windows PC with 6 drives, mostly SSDs and one HDD, that I assume are all NTFS. Two of the drives are NVMe(?) attached to the mobo, and I only have one mobo with NVMe slots. I have a number of older boards that top out at SATA connections.

    If I install Linux Mint, can I format one NVMe drive with whatever the currently preferred Linux filesystem is, install Mint, and move the files from the other drives around as I format each one?

    Or do I need to move all the data I want to keep to SATA drives, put them in a different windows box, and then copy them over using a network connection?

    It's been a while, and I'm guessing the fact that I couldn't find an answer means Linux still doesn't work with NTFS well enough to do what I'm thinking of.

    • Nibodhika@lemmy.world · 1 year ago

      I was read/writing on NTFS partitions back in 2004, so your information that Linux doesn’t work with NTFS is at least 20 years old.

    • bloodfart@lemmy.ml · 1 year ago

      linux can read and write ntfs, edit partition tables and resize ntfs partitions

      you could (theoretically, do not do this!) free up 8gb of space on your ssd in windows, defragment it, then boot a linux installer and use it to shrink the ntfs partition and install linux in that 8gb.

    • shadowintheday2@lemmy.world · 1 year ago

      You can freely manipulate NTFS in Linux. Just make sure your distribution is on kernel >= 5.15 with the NTFS3 driver enabled; otherwise you may need to install the ntfs-3g driver. Other than that, the Arch Wiki has info that may help you on any distro:

      https://wiki.archlinux.org/title/NTFS

      I have done something similar to what you want to do; I just needed the ntfs-3g driver installed, and the "Disks" (GNOME Disks) application would mount/read/write the disks as usual.

    • NateSwift@lemmy.dbzer0.com · 1 year ago

      It depends on exactly how you plan to do things. The Linux kernel supports reading NTFS but not writing to it. I’m not sure exactly how full your drives are, but you might be able to consolidate some before installing Linux.

      There are a couple utilities that let you mount an NTFS file system for read & write, but I wouldn't trust them for important data.

      • d3Xt3r@lemmy.nz · 1 year ago

        The Linux kernel supports reading NTFS but not writing to it.

        That's not true. Since kernel 5.15, Linux uses the new NTFS3 driver, which supports both read and write. Performance-wise it's much better than the old ntfs-3g FUSE driver, and it's arguably better in stability too, since at least kernel 6.2.

        Personally though, I’d recommend being on 6.8+ if you’re going to use NTFS seriously, or at the very least, 6.2 (as 6.2 introduces the mount options windows_names and nocase). @snooggums@midwest.social

      • snooggums@midwest.social · 1 year ago

        As long as I can read from the second nvme drive I have enough total space to easily shuffle around.

        My issue was that I couldn’t fit everything onto just the SSDs at the same time.

        • NateSwift@lemmy.dbzer0.com · 1 year ago

          Reading works great! If you need to mount the drive manually (IIRC Mint should do this for you), you'll need to specify that it's NTFS instead of letting it automatically detect the file system, but other than that it's just plug and play.
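
          If you do end up mounting by hand, a minimal sketch looks like this (the device name is an assumption - check yours with lsblk first):

          sudo mkdir -p /mnt/windows
          sudo mount -t ntfs3 /dev/nvme1n1p2 /mnt/windows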

  • TheHarpyEagle@lemmy.world · 1 year ago

    What is the practical difference between Arch and Debian based systems? Like what can you actually do on one that you can’t on the other?

  • wolf@lemmy.zip · 1 year ago

    How do I enable DNS over HTTPS or DNS over TLS for all connections in NetworkManager in Debian 12?

    It is easy to configure custom DNS servers for all connections via a new .conf file in /etc/NetworkManager/conf.d with a servers=8.8.8.8 entry in the [global-dns-domain-*] section.
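
    For reference, the custom-servers setup described above is just a drop-in file like this (the filename is my own choice):

    # /etc/NetworkManager/conf.d/dns-servers.conf
    [global-dns-domain-*]
    servers=8.8.8.8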

    How can I configure NetworkManager to use DNS over HTTPS or DNS over TLS via a conf file?

    • Captain Aggravated@sh.itjust.works · 1 year ago

      I have had issues with using a NAS over SMB because of some malarkey about reverting to SMB 1.0 or something. Dunno; I stopped backing up to my NAS and just use external drives.

  • PseudoSpock@lemmy.dbzer0.com · 1 year ago

    How can I hide a pinned post without blocking the poster? It bothers me having this at the top of my list all the time, like some reminder on my phone I can’t ack and make go away.

    • Max-P@lemmy.max-p.me · 1 year ago

      Expanding on the other explanations. On Windows, it’s fairly common for applications to come with a copy of everything they use in the form of DLL files, and you end up with many copies of various versions of those.

      On Linux, the package manager manages all of that. So if, say, an app needs GTK, then the package manager makes sure GTK is also installed. And since your distribution's package manager manages everything, mostly building from source code, you get a version of the app specifically compiled against the version of GTK your distribution provides.

      So if we were to do it kind of the Windows way, it would very, very quickly become a mess, because it's not just one big self-contained package you drop in C:\Program Files. Linux follows the FHS (Filesystem Hierarchy Standard), which roughly defines where things should go. Binaries go to /usr/bin, libraries to /usr/lib, shared files go to /usr/share. A bunch of those locations are somewhat special; for example, .desktop files in /usr/share/applications show up in the menu to launch them. That said, Linux does have a location for big standalone packages: that's usually /opt.
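
      As an illustration (the app name is hypothetical), a single package's files might land like this:

      /usr/bin/myapp                          # the executable
      /usr/lib/libgtk-4.so.1                  # a shared library, used by many apps
      /usr/share/applications/myapp.desktop   # menu entry for the launcher
      /usr/share/icons/hicolor/48x48/apps/myapp.png   # icon, found by naming convention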

      There are advantages and inconveniences to both methods. The Linux way has the advantage of updating libraries for all apps at once, and it reduces clutter; things are generally more organized. You can guess where an icon file will be located most of the time, because they all go to the same place, usually with a naming convention as well.

    • Julian@lemm.ee · 1 year ago

      Someone already gave an answer, but the reason it's done that way is because on Linux, programs generally don't install themselves - a package manager installs them. Windows (outside of the Windows Store) just trusts programs to install themselves and include their own uninstaller.

    • NaN@lemmy.sdf.org · 1 year ago

      Because Linux and the programs themselves expect specific files to be placed in specific places, rather than a bunch of files in a single program directory like you have in Windows or (hidden away) in macOS.

      If you compile programs yourself you can choose to put things in different places. Some software is also built to be more self contained, like the Linux binaries of Firefox.

      • krash@lemmy.ml · 1 year ago

        Actually, Windows puts 95% of its files in a single directory, and sometimes you get a surprise DLL in your \system[32] folder.

          • teawrecks@sopuli.xyz · 1 year ago

            In /etc? Are you sure? /usr/share/applications has your system-wide .desktop files, (while .local/share/applications has user-level ones, kinda analogous to installing a program to AppData on Windows). And .desktop files could be interpreted at a high level as an “app”, even though they’re really just a simple description of how to advertise and launch an application from a GUI of some kind.

              • teawrecks@sopuli.xyz · 1 year ago

                The actual executables shouldn’t ever go in that folder though.

                Typically packages installed through a package manager stick everything in their own folder in /usr/lib (for libs) and /usr/share (for any other data). Then they either put their executables directly in /usr/bin or symlink over to them.

                That last part is usually what results in things not living in a consistent place. A package might have something that qualifies as both an executable and a lib, so they store it in their lib folder, but symlink to it from bin. Or they might not have a lib folder, and just put everything in their share folder and symlink to it from bin.
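
                For example (the package name is hypothetical), you might see:

                $ readlink /usr/bin/someapp
                /usr/lib/someapp/someapp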

        • Ramin Honary@lemmy.ml · 1 year ago

          They do! /bin has the executables, and /usr/share has everything else.

          Apps and executables are similar but separate things. An app is a concept used in GUI desktop environments: a user-friendly front end to one or more executables in /usr/bin, presented by the desktop environment (or app launcher) as a single thing. On Linux these apps are usually defined in a .desktop file. The apps installed by the Linux distribution's package manager are typically in /usr/share/applications, and each one points to one of the executables in /usr/bin or /usr/libexec. You could even have two different "apps" launch a single executable, but each using different CLI arguments to give the appearance of different apps.
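
          A bare-bones .desktop file looks something like this (contents are illustrative):

          [Desktop Entry]
          Type=Application
          Name=OpenTTD
          Exec=openttd
          Icon=openttd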

          The desktop environment you use can be configured to display apps from multiple sources. You might also install apps from FlatHub, Lutris, Nix, Guix, or any of several other package managers. This is analogous to how in the CLI you need to set the "PATH" environment variable. If everything is configured properly (and that is not always the case), your desktop environment will show apps from all of these sources collected in the app launcher. Sometimes you have the same app installed from multiple sources, and you might wonder "why does GNOME Shell show me 'OpenTTD' twice?"

          There is no easy solution, no one agreed-upon algorithm to keep things easy for end users who install apps from multiple sources besides the default app store. Windows, Mac OS, and Android all have the same problem. But I have always felt that Linux (especially Guix OS) has the best way of solving this problem.

    • Possibly linux@lemmy.zip · 1 year ago

      Because of dependencies. You also should not be installing things you download off the internet, nor should you use install scripts.

      The way you install software is through your distro's package manager or Flatpak.

    • shadowintheday2@lemmy.world · 1 year ago

      You install program A; it needs and installs libpotato. Later you install program B, which depends on libfries, and libfries depends on libpotato. Since you already have libpotato installed, only program B and libfries are installed. The intelligence behind this is called a package manager.

      In Windows, when you install something, it usually installs itself as a standalone thing and complains/breaks when dependencies are not met - e.g. having to install Visual C++ 2005-202x for games, the JRE for Java programs, etc.

      Instead of making you install everything you need to run something complex, the package manager does this for you and keeps track of where files are.

      And each package manager/distribution has an idea of where files should be stored.
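
      With apt, using the hypothetical names above, that whole dance is just:

      sudo apt install program-a   # apt pulls in libpotato automatically
      sudo apt install program-b   # apt pulls in libfries; libpotato is already there, so it's reused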

    • bloodfart@lemmy.ml · 1 year ago

      different strokes.

      windows comes from the personal computing world and retains a bunch of stuff from it to this very day for no good reason. in this case, there used to be no guarantee that a particular installation target would have the target directory mapped in a consistent way, so the installer would make a guess and give the user a chance to change it.

      if that sounds stupid, it is. no one writes in assembly anymore, they target the OS and nowadays the OS will have a consistent set of folders to install stuff to. we all know where the program “should” be installed to already.

      but it didn't used to be like that in the PC world! used to be your computer wasn't a fixed-purpose windows computer from the jump, never to be anything else. there were different OSes that people would use regularly and even different DOS environments which a person could use to run programs under. hard disks weren't disks inside the machine, but big beige external disks that you'd plug up, set beside the computer and access after booting. in that setup, where a programmer targeted DOS (if they cared about the execution environment at all and didn't just write for the processor), it made sense to ask where someone was gonna want to install their software, and to what extent they'd even want to start dirtying up the media they paid good money for with some knucklehead's weird files from some goofy program on a stack of floppy disks.

      linux comes from the unix world, where the question of where something installs is easy and straightforward: it installs in $PATH. what is $PATH? it's where the os will look when you try to run something, to see if it can run any program by that name. if a program isn't installed in $PATH, then when you type its name in and hit enter the computer won't know what the hell you're talking about, and you'll have to type its whole-ass location out and hit enter.
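
      for example, a typical $PATH looks like this (exact directories vary by distro):

      $ echo $PATH
      /usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin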

      Why didn’t unix systems that linux imitates ask you where to install stuff? because usually it wasn’t your choice! linux was unix for personal computers and unix was run on systems that took up whole rooms with all sorts of equipment. you might be the user of that system but never have access to the room with all the spinning disks and flashing lights, stuck on a terminal dialing in over a serial line.

      so the assumption was that you’d have a variable in your user environment that would say where things were installed but not that you’d have the ability to change it or even install things.

      so why in a linux environment would you ever install anything outside of $PATH or even want to be sure where something’s installed at all?

      even under linux it can be useful to do either. installing outside of path keeps programs from being accidentally autocompleted or invoked. installing in a particular component of $PATH ($PATH can be many directories!) lets you put serious business programs that demand maximum performance on faster media.

      so why the hell won’t linux systems give you the option of installing in a specific location or outside of $PATH altogether?

      they will, but unlike windows, they don’t ask you. unless you specifically ask to do that unique and very abnormal operation, they just do the usual thing. when you want to install weirdly you gotta dig into your package manager and packaging system. sometimes you unzip a package and change a line in a file then zip it back up and install from your modified version.

    • Captain Aggravated@sh.itjust.works · 1 year ago

      As in, the directory in which much of the operating system's executable binaries are contained?

      They'll be spread between /bin and /sbin, which might be symlinks to /usr/bin and /usr/sbin. Bonus points: /boot.
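
      On many modern distros /bin is literally a symlink into /usr (a quick check; output varies):

      $ readlink /bin
      usr/bin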

      • KISSmyOSFeddit@lemmy.world · 1 year ago

        A weird catch-all folder for "most important Windows system stuff". It's not 32-bit, just named like that in typical Windows fashion for backwards compatibility.

    • Julian@lemm.ee · 1 year ago

      /bin, since that will include any basic programs (bash, ls, cd, etc.).

    • ogeist@lemmy.world · 1 year ago

      For the memes:

      sudo rm -rf /*

      This deletes everything and is the most popular Linux meme.

      The same “expected” functionality:

      sudo rm -rf /bin/*

      This deletes the main binaries. You can kinda recover from this, but I have never done it.

    • SmashFaster@kbin.social · 1 year ago

      There is no direct equivalent, system32 is just a collection of libraries, exes, and confs.

      Some of what others have said is accurate, but to explain a bit further:

      Longer explanation:

      system32 is just some folder name the MS engineers came up with back in the day.

      Linux on the other hand has many distros, many different contributors, and generally just encourages a … better … separation for types of files, imho

      The linux filesystem is well defined if you are inclined to research more about it.
      Understanding the core principles will make understanding virtually everything else about "linux" easier, imho.

      https://tldp.org/LDP/intro-linux/html/sect_03_01.html

      tl;dr: "On a UNIX system, everything is a file; if something is not a file, it is a process."

      The basics:

      • /bin - base level executables, ls, mv, things like that
      • /sbin - super-level-only (root) executables, parted, reboot, etc
      • /lib - Somewhat self-explanatory, holds libraries, lots of things put their libs here, including linux kernel modules, /lib/modules/*, similar to system32’s function of holding critical libraries
      • /etc - Configuration lives here, generally speaking, /etc/<application name> can point you in the right direction, typically requires super-user (root) to edit
      • /usr - “User installed” software, which can be a murky definition in today’s world, but lots of stuff ends up here for installed software, manuals, icon files, executables

      Bonus:

      • /opt - A special location, generally third-party, bundled-style software likes to use this, Java for instance, but historically some admins use it as the “company location”, meaning internally developed software would live there.
      • /srv - Largely subjective, but myself and others I know use it for partitions that are outside the primary disk, for instance we use /srv/db for database volumes, /srv/www for web-data volumes, /srv/Media for large-file storage, etc, etc

      For completeness:

      • /home - You’ll find your user directories here, personally, this is my directory I backup, I don’t carry much more with me on most systems.
      • /var - “Variable data”, basically meaning any data that will likely grow over time, eg: /var/log
    • NaN@lemmy.sdf.org · 1 year ago

      Don’t think there is.

      system32 holds files that are in various places in Linux, because Windows often puts libraries with binaries and Linux shares them.

      The bash in /bin depends on libraries in /lib for example.
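
      You can see that with ldd (output abridged; paths vary by distro):

      $ ldd /bin/bash
      libtinfo.so.6 => /lib/x86_64-linux-gnu/libtinfo.so.6
      libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6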

  • shaytan@lemmy.dbzer0.com · 1 year ago

    Is explicit sync a good enough solution to make Wayland gaming with Nvidia a reality (and remove window flickering, like some people claim it will)? It's the last obstacle I'm facing in moving my main PC to Linux, and I don't really want to use X11.

    PS: Lesson learned, next time I'll get an AMD GPU.

  • jaybone@lemmy.world · 1 year ago

    Question about moving from Ubuntu to Debian - Package updates and security updates…

    On Ubuntu, I seem to get notifications almost every week about new package updates. (Through the apt UI)

    On Debian, I don’t see this.

    I can run apt update and apt upgrade

    On Ubuntu, I see this pull a bunch of package data from various package repo URLs.

    On Debian, I only see this pulling package data from two or three repo URLs at debian.org

    Mainly I am concerned about security updates and bug fixes. Do I need to manually add other repo sources to the apt config files? Or does Debian update those repos regularly?
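
    For reference, a stock Debian 12 /etc/apt/sources.list usually already carries the security repo alongside the main ones - something like this, which would match the two or three debian.org URLs I'm seeing:

    deb http://deb.debian.org/debian bookworm main
    deb http://deb.debian.org/debian bookworm-updates main
    deb http://security.debian.org/debian-security bookworm-security main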

  • MojoMcJojo@lemmy.world · 1 year ago

    I want to turn a Microsoft Surface Go 2 into a Kali Linux machine. I would appreciate any guidance pulling this off. I want to use it for learning IT security stuff, partly for work but mostly for curiosity. Occasionally I run across malware, trojans, and I want to look under the hood to see how they work. I'm assuming Kali is the best tool for the job and that Lemmy is the place to go for tooling around with tools.