I have a PC currently configured to dual boot Windows 10 and Linux Mint. I don’t need Windows anymore, but Mint is working just fine and I’d rather avoid wiping the whole thing and starting over. Is there a safe way to just get rid of Windows?

  • Labna@lemmy.world
    link
    fedilink
    arrow-up
    1
    ·
    14 hours ago

    Hi, I didn’t see this answer yet for the case where you only have your PC and no other large storage:
    If you still have the installation USB (or can recreate one), boot from it and open GParted. Use it to remove the two Windows partitions: the main one with the system and the recovery one (if there is one), but don’t touch the ESP, whether it sits first or last, if it exists. Then you can expand your Linux partitions to claim the freed space. Extending to the right is fast, but extending to the left can be really slow and prone to failure.
    In case your Linux partitions are all on the right, you can also create a new main partition, install Linux onto it, reboot from the USB, move your user and configuration files onto the new system, delete the old installation’s partitions, and then extend the new install to take up the full drive.
    There are also commands to remove the old ESP/boot entries, which I don’t remember offhand (see the sketch below for one way to do it).
    This can take a few hours, so be patient.
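
    For that boot-entry cleanup, something like this works once Linux is running again (a sketch: the entry number and the /boot/efi mount point are assumptions, so check the output of the first command before deleting anything):

    sudo efibootmgr -v                    # list UEFI boot entries; note the Windows Boot Manager number
    sudo efibootmgr -b 0001 -B            # delete that entry (replace 0001 with the number you saw)
    sudo rm -r /boot/efi/EFI/Microsoft    # optionally remove the leftover Windows files from the ESP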

    The other option, taking a backup (dd) of the main partition, is obviously safer, but it takes nearly the same amount of time and needs an external drive.

  • data1701d (He/Him)@startrek.website
    link
    fedilink
    English
    arrow-up
    1
    ·
    16 hours ago

    Do you have data on the Windows partition?

    Either way, a good way to do it might be to use dd (or a different disk image tool) to copy your Linux installation partitions to a portable hard drive, and make sure the image works. Then wipe the drive and copy the Linux partitions back to it via dd or another imaging tool.
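
    A rough sketch of that, assuming the Linux root is /dev/sda2 and the portable drive is mounted at /mnt/backup (both are placeholders, adjust to your layout):

    sudo dd if=/dev/sda2 of=/mnt/backup/linux-root.img bs=4M status=progress   # image the Linux root partition
    sudo mkdir -p /mnt/test
    sudo mount -o loop,ro /mnt/backup/linux-root.img /mnt/test                 # verify the image actually mounts
    sudo umount /mnt/test
    # later, after wiping/repartitioning, write it back:
    sudo dd if=/mnt/backup/linux-root.img of=/dev/sda2 bs=4M status=progress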

  • MentalEdge@sopuli.xyz
    link
    fedilink
    arrow-up
    26
    ·
    2 days ago

    Yes. You can just straight up delete the Windows partition. Windows simply won’t boot anymore, although doing only this won’t remove it from the boot menu.

    You can do this from your running Linux install, but if you want to grow the Linux partition to take up the freed space, you’ll need to do that from a live USB.

    No other changes should be necessary. Just delete the Windows partition and grow the Linux partition.

    Make sure you keep the EFI partition, and the swap partition if there is one.
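
    From a live USB, the terminal equivalent looks roughly like this (a sketch assuming Linux is an ext4 /dev/sda2 with Windows on /dev/sda3 right after it, so the freed space lands after the Linux partition; check with lsblk first):

    sudo parted /dev/sda print               # confirm which partition number is Windows
    sudo parted /dev/sda rm 3                # delete the Windows partition
    sudo parted /dev/sda resizepart 2 100%   # grow the Linux partition into the freed space
    sudo e2fsck -f /dev/sda2                 # check the filesystem before resizing it offline
    sudo resize2fs /dev/sda2                 # grow ext4 to fill the enlarged partition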

    • verdigris@lemmy.ml
      link
      fedilink
      arrow-up
      2
      ·
      15 hours ago

      Note: growing the partition from a live USB is only necessary if you want it all to be one partition. If it’s a separate drive, or even if it’s not, you can just format the old Windows partition/drive and mount it as a new storage volume.
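
      For example (a sketch, assuming the old Windows partition turns out to be /dev/sda3; substitute the real device and pick your own label and mount point):

      sudo mkfs.ext4 -L storage /dev/sda3   # reformat the old Windows partition
      sudo mkdir -p /mnt/storage
      sudo mount /dev/sda3 /mnt/storage     # mount it right away
      # to mount it at every boot, add a line like this to /etc/fstab:
      # LABEL=storage  /mnt/storage  ext4  defaults  0  2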

    • HaraldvonBlauzahn@feddit.org
      link
      fedilink
      arrow-up
      2
      ·
      edit-2
      1 day ago

      I generally agree, but the best way to use the extra partition might be to keep it as a reserve for installing the next distribution release. So you go

      Partition A: Ubuntu 24.10

      Partition B: /home

      Partition C: Ubuntu 25.04

      And swap A and C for the next upgrade. It is really nice to have a complete, compatible fallback system.

        • HaraldvonBlauzahn@feddit.org
          link
          fedilink
          arrow-up
          2
          ·
          edit-2
          12 hours ago

          “curious how you move all packages over”

          One can copy the system using a tar backup, fix the mount points by changing the volume label (which identifies the mount point), and then do a dist-upgrade.

          I guess that’s the best way to do it on a server. But for desktop systems, I now think it is better to make a list of manually installed packages and to reinstall the packages that are still needed from that list (a minimal sketch follows the list below). This has two advantages:

          1. One gets rid of cruft and experimental installs that are no longer needed, which is really important in the long term.
          2. Some systems (I am looking at you, GNOME) can break in an ugly way when doing an upgrade instead of a reinstall. Very bad behaviour, but it can happen. (And this might answer the question of whether Debian is more stable than Arch: yes, as long as you don’t upgrade GNOME.)
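
          On Debian/Ubuntu-style systems that list can be produced and replayed roughly like this (a sketch; prune the list by hand before reinstalling, since it will still contain the cruft you want to drop):

          apt-mark showmanual > ~/manual-packages.txt    # on the old system: save the manually installed packages
          # on the fresh system, after editing the list down:
          xargs -a manual-packages.txt sudo apt-get install -y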

          And one more thing I do for the dot files:

          Say my home folder is /home/hvb. Then I install Debian 12 and set /home/hvb/deb12 as my home folder (by editing /etc/passwd). I put my data in /home/hvb/Documents, /home/hvb/Photos/, etc. and symlink these folders into /home/hvb/deb12. When I upgrade, I first create a new folder /home/hvb/deb14, copy my dot files over from deb12, and install a new root partition with my home set to /home/hvb/deb14. Then I again link my data folders, documents and media such as /home/hvb/Documents, into /home/hvb/deb14. The reason I do this is that new versions of programs can upgrade the dot files to new syntax or features, but when I switch back to booting Debian 12, the old versions can’t necessarily read the newer-version config files (the changes are mostly promised to be backward-compatible, but not forward-compatible).
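
          Spelled out as commands, that setup looks roughly like this (a sketch using the deb12/deb14 names from above):

          mkdir /home/hvb/deb14
          cp -a /home/hvb/deb12/.bashrc /home/hvb/deb12/.config /home/hvb/deb14/   # carry the dot files over
          ln -s /home/hvb/Documents /home/hvb/deb14/Documents                      # shared data stays outside the per-release home
          ln -s /home/hvb/Photos /home/hvb/deb14/Photos
          sudo usermod -d /home/hvb/deb14 hvb    # same effect as editing the home field in /etc/passwd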

          All in all, this is a very conservative approach, but it has worked for me running Debian for about 15 years now on a rather large desktop setup.

          And the above also worked well for me when distro-hopping. Though nowadays it is more often recommended to install parallel dual-booted distros on a separate removable disk, since such installs can also modify the GRUB and EFI setup, early graphics drivers and so on. In theory dual-boot installs should be completely independent… but my experience is that this is no longer always guaranteed.

        • HaraldvonBlauzahn@feddit.org
          link
          fedilink
          arrow-up
          1
          ·
          13 hours ago

          Another, possibly quicker, way to do this is to use a larger Btrfs disk, create subvolumes from snapshots, and mount those. When the subvolumes are no longer needed, they can be deleted like any other folder.
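
          Roughly (a sketch, assuming the Btrfs top level is mounted at /mnt and the current root is a subvolume called @):

          sudo btrfs subvolume snapshot /mnt/@ /mnt/@next   # writable snapshot to install/upgrade into
          # boot into it, e.g. via the rootflags=subvol=@next kernel parameter or a new fstab/GRUB entry
          sudo btrfs subvolume delete /mnt/@                # once the old root is no longer needed, drop it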

        • HaraldvonBlauzahn@feddit.org
          link
          fedilink
          arrow-up
          1
          ·
          edit-2
          13 hours ago

          I already use Guix shell as a package manager on top of Debian (mainly for programming), and occasionally Arch in a VM (managed by virt-manager).

          I don’t have the impression that using NixOS or full Guix would save me time. But I will probably try Guix System on a spare disk in the coming months, when I have the time and energy to get a feel for it.

          • verdigris@lemmy.ml
            link
            fedilink
            arrow-up
            1
            ·
            edit-2
            1 hour ago

            Oh it almost certainly won’t save you time unless you already know Nix and how the ecosystem works. But it does make rolling back to previous configurations basically effortless, which seems like it would obviate your need for a full staging drive.

    • over_clox@lemmy.world
      link
      fedilink
      arrow-up
      3
      arrow-down
      1
      ·
      edit-2
      2 days ago

      You sound like you know some things that perhaps I don’t know.

      Slightly different question…

      I have a 128GB SSD with Linux Mint MATE 20.3, and I did a full and successful dd backup to my 4TB backup drive.

      I have a 100GB external USB hard drive as a test medium for Mint MATE 22.1. I am happy with my test setup, and tried to dd that over to the 128GB SSD. But it wouldn’t boot.

      I restored the original 128GB SSD image and all is good right now, but why the hell didn’t the 100GB→128GB copy even boot?

      Edit: Secure Boot has been disabled all along, screw that headache.

      • MentalEdge@sopuli.xyz
        link
        fedilink
        arrow-up
        6
        ·
        edit-2
        2 days ago

        I’m not sure.

        AFAIK dd will create an IDENTICAL environment. This is actually not desirable as it will cause UUID conflicts where multiple partitions in a system have the same UUID.

        Unless you’re restoring something you imaged, dd’ing one disk onto another requires fiddling with the UUIDs and fstab to make the partitions unique again, so the kernel can tell them apart.
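
        For ext4 partitions, that fiddling looks something like this (a sketch assuming the clone ended up on /dev/sdb1):

        sudo blkid                         # see which UUIDs are now duplicated
        sudo tune2fs -U random /dev/sdb1   # give the cloned ext4 filesystem a fresh UUID
        # then update /etc/fstab (and the bootloader config) on the clone to match the new UUID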

        • over_clox@lemmy.world
          link
          fedilink
          arrow-up
          1
          arrow-down
          1
          ·
          edit-2
          2 days ago

          The goal was to migrate the 100GB to the 128GB, hopefully expand it, and format the 100GB for future temporary/experimental use.

          I never planned on having both drives actively running at the same time, so I don’t think there should have been any UUID issues, nor did I run across any errors suggesting such an issue.

          But even without expanding the partition, the dd command should have 1:1 copied the 100GB, with space to spare, and be bootable, right? Or so I thought…

          I had no problem dd restoring the original 128GB contents though, so at least I didn’t bork everything. Also the 100GB external USB is still fine. 👍

          🤷

          • FauxLiving@lemmy.world
            link
            fedilink
            arrow-up
            2
            ·
            edit-2
            2 days ago

            Is your SSD an NVMe drive? It’s possible that there are non-UUID references (maybe in /etc/fstab, or GRUB’s config) to the drive that are involved in the boot process.

            Maybe it is looking for /dev/sda2, which is correct on the USB disk, but now everything is on /dev/nvme0n1p2.

            Solution: Live disk, mount the root and boot partitions, look at the config files and fix the references.

            -Or-

            It could be that your boot manager has an entry for the 128GB drive already, just pointed at the wrong .efi file.

            If you were originally on Arch, for example (btw), on the 128GB drive, then during installation of the bootloader an entry like this would have been inserted into the boot manager:

            HD(1,MBR,0xe892937594,0x3823,0x2398734987)/\EFI\Linux\arch-linux.efi
            

            But now, since you’re on Mint, arch-linux.efi isn’t there and the boot manager falls over.

            Solution: live disk, use efibootmgr (https://wiki.archlinux.org/title/Unified_Extensible_Firmware_Interface#efibootmgr) to delete the bad entry (\arch-linux.efi) and add one pointing to the correct file (\mint.efi? grubx64.efi?).
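
            With efibootmgr that might look like this (a sketch: the entry number, disk, and loader path are assumptions, so check the listing first; Mint, being Ubuntu-based, usually keeps GRUB at \EFI\ubuntu\grubx64.efi):

            sudo efibootmgr -v           # find the stale entry's Boot#### number
            sudo efibootmgr -b 0003 -B   # delete it (replace 0003 with that number)
            sudo efibootmgr -c -d /dev/nvme0n1 -p 1 -L "Linux Mint" -l '\EFI\ubuntu\grubx64.efi'   # add a correct entry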

            e: It looks like Mint uses GRUB, so you could also live disk -> chroot into the environment -> run grub-install (https://wiki.archlinux.org/title/GRUB#Installation) to create the entry. You will still have a ‘bad’ entry, which you can delete with efibootmgr.
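
            The chroot route might go roughly like this from the live disk (a sketch assuming root on /dev/nvme0n1p2 and the ESP on /dev/nvme0n1p1):

            sudo mount /dev/nvme0n1p2 /mnt
            sudo mount /dev/nvme0n1p1 /mnt/boot/efi
            for d in dev proc sys run; do sudo mount --bind /$d /mnt/$d; done
            sudo chroot /mnt
            grub-install    # reinstall GRUB and register the EFI boot entry
            update-grub     # regenerate the GRUB menu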

            • over_clox@lemmy.world
              link
              fedilink
              arrow-up
              1
              ·
              edit-2
              2 days ago

              My 128GB is meant to be an integrated NVMe drive.

              Meant to be, anyway. I literally cut a service-panel hole under the laptop so I can remove or reinstall it whenever I feel like it, or maybe eventually upgrade it.

              I’ve been booting off of other devices, like my 8GB USB flash drive, 100GB USB HDD, and even a live-boot USB DVD drive. It’s actually been very convenient, as I can boot off of whatever the hell I want from USB.

              My backup is on a 4TB, so really no worries to me, I can more or less freely experiment around with whatever OS I want, and if it doesn’t work right, I can just dd my backup over whatever again, and it just works.

              But why doesn’t the dd 100GB>128GB work as I’d expect?

              Obviously that’s not the exact dd command I used, for privacy reasons.

              🤷

              • FauxLiving@lemmy.world
                link
                fedilink
                arrow-up
                1
                ·
                edit-2
                2 days ago

                There aren’t many things happening at boot: the UEFI boot manager points to GRUB, which boots your system.

                It’s almost certainly one of them. The boot manager’s entries can be fixed with efibootmgr.

                Most likely you’ll also have an issue after it boots, because of the switch from being on /dev/sda to /dev/nvme0n1. Your home directory or swap partition from the USB drive is probably in fstab like this:

                /dev/sda3     /home    ext4    defaults   0 2
                /dev/sda4     none     swap    sw         0 0
                

                Now /dev/sda doesn’t exist anymore, because you’re on an NVMe drive; those partitions will be at /dev/nvme0n1p3 and /dev/nvme0n1p4 instead. You’ll have to edit fstab manually to fix this. If fstab is using UUIDs, then it’ll work as-is, since the filesystem UUIDs were copied as part of the image.
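
                From a live disk that check/fix could look like this (a sketch; the partition numbers are assumptions):

                sudo blkid /dev/nvme0n1p3 /dev/nvme0n1p4   # get the UUIDs of the copied partitions
                sudo mount /dev/nvme0n1p2 /mnt
                sudo nano /mnt/etc/fstab                   # replace the /dev/sdaX lines with UUID=... entries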

                e:

                Obviously that’s not the exact dd command I used, for privacy reasons.

                Unless you did

                dd if=/dev/urandom of=/dev/nvme0n1
                

                Then you’re probably fine.

                • over_clox@lemmy.world
                  link
                  fedilink
                  arrow-up
                  1
                  ·
                  2 days ago

                  Wait wait, I just double checked.

                  Apparently my 128GB is a SATA M.2 drive.

                  Fuck I’m still learning this new hardware. 🤦‍♂️

  • Mordikan@kbin.earth
    link
    fedilink
    arrow-up
    11
    arrow-down
    1
    ·
    2 days ago

    You can use the GParted tool to graphically remove the partition(s), format them to whatever filesystem type you are interested in, and just have those mounted as extra data drives. Or merge them into your Linux partition (depending on setup). GParted will need to be run with sudo, as you are interacting with disks.

    Alternatively, you can use a tool like fdisk to change the partitioning in the terminal. You can pull the disk info using something like lsblk, so if you had a specific drive it might be sudo fdisk /dev/nvme0n1; then you’d want to print the current table and look through the help.
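
    For example (a sketch; /dev/nvme0n1 is just a placeholder for whatever disk lsblk shows):

    lsblk -f                   # identify the disk and its partitions
    sudo fdisk /dev/nvme0n1    # then inside fdisk: p prints the table, d deletes a partition,
                               # m shows the help, and w writes changes (nothing is saved until w)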

    • Demonmariner@lemmy.worldOP
      link
      fedilink
      arrow-up
      3
      ·
      1 day ago

      Not a problem for me. All the software I need is either available as a native Linux version or runs OK under Wine.

      I’m ready to ditch Windows entirely at this point. I just need to find the best way to do that, without having to rebuild the Linux side of my dual boot PC.

        • the_q@lemmy.zip
          link
          fedilink
          arrow-up
          19
          arrow-down
          1
          ·
          2 days ago

          First of all, this isn’t accurate. Second, you have no idea what OP uses. Third, who simps for Windows?

            • muhyb@programming.dev
              link
              fedilink
              arrow-up
              3
              ·
              1 day ago

              Yesterday, I had to deal with Windows and I noticed that the top option on the right-click menu was Copilot. It’s getting worse and worse apparently.

              • FauxLiving@lemmy.world
                link
                fedilink
                arrow-up
                2
                ·
                19 hours ago

                I had a ‘work emergency’ that turned out to be Office spamming advertisements for Copilot’s Office integration.

                They thought something was wrong with their account. Nope, Microsoft being scumbags and making advertisements look like system messages.

    • Magitian@programming.dev
      link
      fedilink
      arrow-up
      6
      ·
      2 days ago

      Why do you try to make Linux seem unusable? It’s not Linux’s fault that proprietary software doesn’t work on it. Would you next criticize GNOME, KDE Plasma, etc., and any of the gazillion other pieces of software exclusive to Linux, for not working on W*ndows to reach a wider audience?