Title is the TL;DR. More info about what I’m trying to do below.

My daily driver computer is Laptop, which has an SSD with no possibility of expansion.

So for storage of lots and lots of files, I have an old, low-resource Desktop with a bunch of HDDs plugged in (mostly via USB).

I can access Desktop files via SSH/SFTP on the LAN. But it can be quite slow.

And sometimes (not too often; this isn’t a main requirement) I take Laptop to use elsewhere. I do not plan to make Desktop available outside the network so I need to have a copy of required files on Laptop.

Therefore, sometimes I like to move the remote files from Desktop to Laptop to work on them, to make a sort of local cache. This could be individual files or directory trees.

But then I have a mess of duplication. Sometimes I forget to put the files back.

Seems like Laptop could be a lot more clever than I am and help with this. Like could it always fetch a remote file which is being edited and save it locally?

Is there any way to have Laptop fetch files, information about file trees, etc, located on Desktop when needed and smartly put them back after editing?

Or even keep some stuff around. Like lists of files, attributes, thumbnails etc. Even browsing the directory tree on Desktop can be slow sometimes.

I am not sure what this would be called.

Ideas and tools I am already comfortable with:

  • rsync is the most obvious foundation to work from, but I am not sure exactly what the best configuration would be or how to manage it.

  • luckyBackup is my favorite rsync GUI front end; it lets you save profiles, jobs, etc., which is sweet

  • FreeFileSync is another GUI front end I’ve used, but I prefer lucky/rsync these days

  • I don’t think git is a viable solution here: there are already git directories included, there are many non-text files, and some of the directory trees are so large that they would cause git to choke just looking at all the files.

  • syncthing might work. I’ve been having issues with it lately but I may have gotten them ironed out.

Something a little more transparent than the above would be cool but I am not sure if that exists?

Any help appreciated, even just an idea of what to web-search for, because I am stumped even on that.

  • bionicjoey@lemmy.ca · 7 months ago

    Is the desktop using a wifi card? You could plug it into the router to shorten the journey and halve the number of wireless hops.

  • bloodfart@lemmy.ml · 7 months ago

    Hey, I’m replying again directly to your post in the hopes that I can push back against some of the advice you’re getting. My intent is to do an end run around arguing with the people making these suggestions: they’re very smart and made them for good reasons, but their ideas aren’t necessarily good for you, and I don’t want you to have to go through a troublesome recovery like I did and like many people on the internet have.

    Do not under any circumstances set up RAID or zpools for your data drives once you get them inside a case and on the PCIe bus somehow.

    In these configurations accessing a file requires spinning up all the drives in the array or pool. Not only is that putting wear and tear on your drives, it increases the temperature of the case and draws much more power. Those conditions lead to drive failure. When your drive fails and you have a spare to use in its place, resilvering (the process of using extra data called parity to rebuild the contents of the failed drive on the spare one) will put those exact conditions on your remaining drives.

    For people like us, who may not have a hot spare, or great cooling, or an offsite backup, an array like that will set us up for failure rather than resilience.

    Please consider using mergerfs or something like it and a snapshot parity system like snapraid instead.

    There are very good use cases for the RAID and zpool systems that have been brought up, but you aren’t there. I got there at moderate expense and moved away from them.
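
    To make that concrete, here is a rough sketch of what a mergerfs + SnapRAID setup can look like (the paths, disk names, and pooling policy are illustrative assumptions, not taken from the OP’s system):

      # /etc/fstab: pool /mnt/disk1..disk3 into one tree with mergerfs
      /mnt/disk* /mnt/storage fuse.mergerfs allow_other,cache.files=partial,dropcacheonclose=true,category.create=mfs 0 0

      # /etc/snapraid.conf: parity on a dedicated disk, content lists in several places
      parity /mnt/parity1/snapraid.parity
      content /var/snapraid.content
      content /mnt/disk1/.snapraid.content
      data d1 /mnt/disk1/
      data d2 /mnt/disk2/
      data d3 /mnt/disk3/

    Then snapraid sync runs periodically (e.g. nightly from cron) to update parity; unlike realtime RAID, only the disks actually being read or synced have to spin up.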

    • linuxPIPEpower@discuss.tchncs.de (OP) · 7 months ago

      Thanks, I appreciate it. I’ve been around the block enough times to expect maximalist advice in places like this; people who are motivated to be hanging around in a forum just waiting for someone to ask a question about hard drives are coming from a certain perspective. Honestly, it’s not my perspective. But the information is helpful in totality, even though I’m unlikely to end up doing what any one person suggests.

      RAID is something I’ve seen mentioned over and over again. Every year or two I go reading about them more intentionally and never get the impression it’s for me. Too elaborate to solve problems I don’t have.

  • wargreymon2023@sopuli.xyz · 7 months ago

    1. WiFi is less responsive than Ethernet.
    2. You don’t even need encryption on a local connection (no need for SSH) if it is never exposed globally.
    3. If you have that many external HDDs, I would rather get a NAS and call it a day; it comes with everything configured for you as well.
    4. I don’t know what big data you’re accessing, but get a larger SSD to store whatever you might visit often from your HDDs.
    • nickwitha_k (he/him)@lemmy.sdf.org · 7 months ago

      Strong disagreement on №2. That kind of thinking is how devices on your home network end up joining a malicious botnet without your knowledge, and how identity theft happens. ALL network communications should assume that a malicious actor may be present, and use encryption in transit for anything remotely approaching private, identifiable, or sensitive.

  • SethranKada@lemmy.ca · 7 months ago

    Have you tried setting up WebDAV? From what I know it has local cache support. I use it to access the files on my NAS remotely. Of course, I could be wrong, and my NAS came with it preinstalled so I’m not actually sure how to set it up manually.

    • linuxPIPEpower@discuss.tchncs.de (OP) · 7 months ago

      I’ve used WebDAV here and there. I found some aspects of setup frustrating, so I tend to keep away from it except for smaller, short-term use cases.

      Does it do the caching thing or is it more of an alternative to SSH/SFTP?

      If it’s an alternative, what is the benefit?

      IIRC WebDAV can be set up from inside certain file managers (like Nautilus with an extension installed), by using a web server like Apache, or by using smaller standalone services.
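
      For reference, the Apache route is roughly the following (an untested sketch; the paths, auth file, and /dav alias are placeholders, and mod_dav and mod_dav_fs must be enabled):

        # Apache vhost/config fragment serving /srv/dav over WebDAV
        DavLockDB /var/lib/dav/lockdb
        Alias /dav /srv/dav
        <Directory /srv/dav>
            Dav On
            AuthType Basic
            AuthName "WebDAV"
            AuthUserFile /etc/apache2/dav.passwd
            Require valid-user
        </Directory>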

      • dutchkimble@lemy.lol · 7 months ago

        They mean to keep the files only on your desktop, keep it always on, and use a VPN from your laptop to your desktop to access whatever files you want at that time, directly from the desktop.

      • just_another_person@lemmy.world · 7 months ago

        Nevermind. I read your question way wrong. You have a local network and want better access to things there.

        It depends on the filesystem and on which network service exposes it. SMB will give the worst performance, NFS possibly the best. You’re also going to have lag from however you’re accessing these files on the client, so maybe there’s an issue there.

  • mbirth@lemmy.mbirth.uk · 7 months ago

    What you suggest sounds a lot like the “Briefcase” that was in Windows 9x. I don’t know of something similar, especially not something integrated into Linux.

    The easiest way might be to set up SyncThing to share all of your different folders and then subscribe to those you need on your laptop. Just be aware that if you delete a file on your laptop it will also be deleted on your desktop on the next sync. Unsubscribe from the folder first before freeing up the disk space.

    • linuxPIPEpower@discuss.tchncs.de (OP) · 7 months ago

      if you delete a file on your laptop it will also be deleted on your desktop on the next sync

      This is my fear! I have done it before… Forgetting something is synced and deleting what I thought was “an extra copy” only to realize later that it propagated to the original.

      • ScreaminOctopus@sh.itjust.works · 7 months ago

        If you’re on Linux I’d recommend using btrfs, or bcachefs, with snapshots. It’s basically like Time Machine on macOS. That way if you accidentally delete something you can still recover it.
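
        For example, with btrfs a read-only snapshot before a risky sync is a single command (assuming /home is a btrfs subvolume and /home/.snapshots exists; both are assumptions here):

          # take a read-only, dated snapshot of /home
          sudo btrfs subvolume snapshot -r /home /home/.snapshots/home-$(date +%F)

        Accidentally deleted files can then be copied back out of the snapshot.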

  • Max-P@lemmy.max-p.me · 7 months ago

    Easiest for this might be NextCloud. Import all the files into it, then you can get the NextCloud client to download or cache the files you plan on needing with you.

    • linuxPIPEpower@discuss.tchncs.de (OP) · 7 months ago

      hmm, interesting idea. I don’t get the impression that Nextcloud is reliably “easy”, as it’s kind of a joke how complex it can be.

      Someone else suggested WebDAV, which I believe is the file-sharing protocol Nextcloud uses. Does Nextcloud add anything relevant beyond what’s available from plain WebDAV?

      • Max-P@lemmy.max-p.me · 7 months ago

        I’d say mostly because the client is fairly good and works about the way people expect it to work.

        It sounds very much like a DropBox/Google Drive kind of use case and from a user perspective it does exactly that, and it’s not Linux-specific either. I use mine to share my KeePass database among other things. The app is available on just about any platform as well.

        Yeah, NextCloud is a joke in how complex it is, but you can hide it all away using their all-in-one Docker/Podman container. Still much easier than getting into bcachefs over usbip and the other things I’ve seen in this thread.

        Ultimately I don’t think there are many tools that can handle caching, downloads, going offline, and reconciling differences when back online, all in a friendly package. I looked, and there’s a page on Oracle’s website about a CacheFS, but that might be enterprise-only; there’s catfs in Rust, but it’s alpha and can’t work without the backing filesystem for metadata.
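
        For scale, the AIO master container mentioned above is started with roughly this (a sketch from memory of the Nextcloud AIO README; check it for the current flags):

          sudo docker run \
            --init \
            --name nextcloud-aio-mastercontainer \
            --restart always \
            --publish 80:80 --publish 8080:8080 --publish 8443:8443 \
            --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
            --volume /var/run/docker.sock:/var/run/docker.sock:ro \
            nextcloud/all-in-one:latest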

      • Nextcloud AIO in Docker is dead simple and has been reliable for me.

        The sync client is capable of syncing the whole tree as remote pointers that don’t take space until you access a file, at which point it downloads it locally. You can also set files to always be kept local.

  • bloodfart@lemmy.ml · 7 months ago

    You have two problems.

    Transferring between your laptop and desktop is slow. There are a bunch of reasons this could be. My first thought is that the desktop’s got a slow 100 Mbps NIC or not enough memory. You could also be using something that’s resource-intensive and slow like zfs/zpools or whatever. It’s also possible your laptop’s old 802.11g WiFi is the bottleneck, or that with everything else running at the same time it doesn’t have the memory to hold 40 TB worth of directory tree.

    Plug the laptop into the Ethernet and see if that straightens up your problems.

    You want to work with the contents of Desktop while away from its physical location. Use a VPN or overlay network for this. I have a complex system so I use Nebula. If you just want to get to one machine, you could get away with plain old OpenVPN or WireGuard.
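
    A minimal WireGuard pairing looks something like this (a sketch; the keys, subnet, and endpoint are placeholders; generate real keys with wg genkey):

      # /etc/wireguard/wg0.conf on Desktop
      [Interface]
      Address = 10.8.0.1/24
      ListenPort = 51820
      PrivateKey = <desktop-private-key>

      [Peer]
      PublicKey = <laptop-public-key>
      AllowedIPs = 10.8.0.2/32

      # /etc/wireguard/wg0.conf on Laptop
      [Interface]
      Address = 10.8.0.2/24
      PrivateKey = <laptop-private-key>

      [Peer]
      PublicKey = <desktop-public-key>
      Endpoint = <home-ip-or-ddns-name>:51820
      AllowedIPs = 10.8.0.1/32

    Bring each side up with wg-quick up wg0.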

    E: I just reread your post and the USB is likely the problem. Even over USB 2.0 it’s god-awful. See if you can migrate some of those disks onto the SATA connectors inside your desktop.

    • linuxPIPEpower@discuss.tchncs.de (OP) · 7 months ago

      Thanks!

      I elaborated on why I’m using USB HDDs in this comment. I have been a bit stuck knowing how to proceed to avoid these problems. I am willing to get a new desktop at some point but am not sure what is needed, and I don’t have unlimited resources. If I buy a new device, I’ll have to live with it for a long time. I have about 6 or 8 external HDDs in total. I will probably eventually consolidate the smaller ones into a larger drive, which would bring the count down. Several are 2-4 TB and could be replaced with 1x 12 TB. But I will probably keep using the existing ones for backup if at all possible.

      Re the VPN, people keep mentioning this. I am not understanding what it would do though? I mostly need to access my files from within the LAN. Certainly not enough to justify the security risk of a dummy like me running a public service. I’d rather just copy files to an encrypted disk for those occasions and feel safe with my ports closed to outsiders.

      Is there some reason to consider a VPN for inside the LAN?

  • Sims@lemmy.ml · 7 months ago

    A few ideas/hints: If you are up for some upgrading/restructuring of storage, you could consider a distributed filesystem: https://wikiless.org/wiki/Comparison_of_distributed_file_systems?lang=en.

    Also check fuse filesystems for weird solutions: https://wikiless.org/wiki/Filesystem_in_Userspace?lang=en

    Alternatively, perhaps share the USB drives from ‘desktop’ over IP (https://www.linux.org/threads/usb-over-ip-on-linux-setup-installation-and-usage.47701/), and then use bcachefs with a local disk as cache and the USB-over-IP drive as source. https://bcachefs.org/
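
    The USB-over-IP leg of that would look roughly like this (the bus IDs and the desktop.lan hostname are example values; the linked tutorial has the details):

      # on 'desktop' (server side)
      sudo modprobe usbip_host
      sudo usbipd -D                    # start the usbip daemon
      usbip list -l                     # find the disk's bus ID, e.g. 1-1.2
      sudo usbip bind -b 1-1.2

      # on the laptop (client side)
      sudo modprobe vhci-hcd
      usbip list -r desktop.lan
      sudo usbip attach -r desktop.lan -b 1-1.2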

    If you decide to expose ‘desktop’, then you could also log in remotely and just work with the files directly on ‘desktop’. This of course depends on the usage pattern of the files.

  • Corgana@startrek.website · 7 months ago

    I have a very similar setup to you, and I use SyncThing without issue for the important files (which I keep in my Documents directory to make it easy to remember).

  • MNByChoice@midwest.social · 7 months ago

    NFS and ZeroTier would likely work.

    When at home, NFS will be similar to a local drive, though a bit slower. Faster than SSHFS. NFS is often used to expand limited local space.

    I expect a cache layer on NFS is simple enough, but that is outside my experience.

    The issue with syncing is usually needing to sync everything.
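
    For concreteness, a bare-bones NFS setup is only a few lines (a sketch; the export path, subnet, and hostnames are placeholders to adjust):

      # Desktop: /etc/exports
      /srv/storage 192.168.1.0/24(rw,sync,no_subtree_check)

      # Desktop: apply the export list
      sudo exportfs -ra

      # Laptop: mount the share
      sudo mount -t nfs desktop.lan:/srv/storage /mnt/desktop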

    • linuxPIPEpower@discuss.tchncs.de (OP) · 7 months ago

      What would be the role of Zerotier? It seems like some sort of VPN-type application. I don’t understand what it’s needed for though. Someone else also suggested it albeit in a different configuration.

      Just doing some reading on NFS, it certainly seems promising. Naturally the ArchWiki has a fairly clear instruction document. But I am having a hard time seeing what it is exactly. Why is it faster than SSHFS?

      Using the Cache with NFS > Cache Limitations with NFS:

      Opening a file from a shared file system for direct I/O automatically bypasses the cache. This is because this type of access must be direct to the server.

      Which raises the question: what is “direct I/O”, and is it something I use? This page calls direct I/O “an alternative caching policy”, and the limited amount I can understand elsewhere leads me to infer that I don’t need to worry about it. Does anyone know otherwise?

      The issue with syncing is usually needing to sync everything.

      Yes, this is why syncthing proved difficult when I last tried it for this purpose.

      Beyond the actual files, it would be really handy if some lower-level stuff could be cached/synced between devices, like thumbnails and other metadata. To my mind, remotely perusing the Desktop filesystem from Laptop should be just as fast as looking through local files. I wouldn’t mind dedicating a reasonable chunk of local storage to keeping this available.

      • MNByChoice@midwest.social · 7 months ago

        If there is sufficient RAM on the laptop, Linux will cache a lot of metadata in other cache layers without NFS-Cache.

      • MNByChoice@midwest.social · 7 months ago

        NFS-Cache is a specific cache for NFS and does not represent all caching that can be done for files over NFS. “Direct I/O” is also a specific thing, and should not be generalized from the everyday meanings of “direct” and “I/O”.

        Let’s skip those entirely for now as I cannot simply explain either. I doubt either will matter in your use case, but look back if performance lags.

        One laptop accessing one NFS share will have good performance on a quiet local network.

        NFS is an old protocol that is robust and used frequently. NFSv3 is not encrypted. NFSv4 has support for encryption. (ZeroTier can handle the encryption.)

        SSHFS is a pseudo file system layered over SSH. SSH handles encryption. SSHFS is maybe 15 years old and is aimed at convenience. SSH is largely aimed at moving streams of text between two points securely. Maybe it is faster now than it was.

      • ScreaminOctopus@sh.itjust.works · 7 months ago

        NFS is generally the way network storage appliances are accessed on Linux. If it’s a computer you know you’ll be accessing files on long-term, it’s generally the way to go, since it’s a simple, robust, high-performance protocol used by pros and amateurs alike. SSHFS is an abuse of the SSH protocol that lets you mount a directory on any computer you can get an SSH connection to. You can think of it like VSCode remote editing, but it’ll work with any editor or other program.

        You should be able to set up NFS with write caching, etc., that will allow it to be more similar in performance to a local filesystem. Note that you may not want write caching specifically if you’re going to suddenly disconnect your laptop from the network without unmounting the share first. Your actual performance might not be the same, especially for large transfers, due to the throughput of your network and connection quality. In my general experience sshfs is kind of slow, especially when accessing many small files; NFS is usually much faster.
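
        On the client side, much of that tuning is mount options plus FS-Cache (a sketch; the fsc option only does something if the cachefilesd daemon is installed and running, and the values are starting points to tune):

          # Laptop: cache file data on local disk (fsc) and relax attribute revalidation
          sudo mount -t nfs -o rw,fsc,actimeo=60 desktop.lan:/srv/storage /mnt/desktop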

      • MNByChoice@midwest.social · 7 months ago

        ZeroTier allows for a mobile, LAN-like experience. If the laptop is at a café, the files can be accessed as if at home, within network performance limits.
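
        For scale of effort, joining a ZeroTier network is a couple of commands per machine (assuming a network already created in the ZeroTier web console; the ID is a placeholder):

          curl -s https://install.zerotier.com | sudo bash
          sudo zerotier-cli join <network-id>
          sudo zerotier-cli listnetworks   # shows the assigned ZeroTier IP once the node is authorized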

  • Gravitywell@sh.itjust.works · 7 months ago

    Can you upgrade the desktop? What speed is your laptop’s WiFi?

    I just use rsync manually until I have syncthing working, but these don’t really solve slowdown issues and aren’t mounted. I would look into a better NIC and/or storage for the desktop, or possibly your router.

    Try using something like iperf to measure the raw speed of the connection between your two systems, see if it’s what it should be (around 300-600 Mbps for wireless-to-wired locally), and try to narrow down where the bottleneck is.

    • linuxPIPEpower@discuss.tchncs.de (OP) · 7 months ago

      A few weeks ago I put some serious time/brainpower into the network and got it waaaay smoother and faster than before. Finally implemented some upgraded hardware that has been sitting on a shelf for too long.

      I tried iperf. Actually iperf3 because that’s the first tutorial I found. Do you have any opinion on iPerf vs iperf3? On Desktop I ran:

      iperf3 -s -p 7673
      

      On Laptop I am currently doing some stuff I didn’t want to quit, so this may not be a totally fair test. I’ll try re-running it later. That said, I ran:

       iperf3 -c desktop.lan -p 7673 --bidir
      

      And what looks like a summary at the bottom:

      [ ID] Interval           Transfer     Bitrate         Retr
      [  5]   0.00-10.00  sec   102 MBytes  86.0 Mbits/sec  152             sender
      [  5]   0.00-10.00  sec   102 MBytes  85.6 Mbits/sec                  receiver
      

      I actually have AnotherDesktop on the LAN, also connected via ethernet. Going from Laptop -> AnotherDesktop gets results similar to the above.

      However going AnotherDesktop —> Desktop gets 10x better results:

      [  5]   0.00-10.00  sec  1.09 GBytes   936 Mbits/sec    0             sender
      [  5]   0.00-10.00  sec  1.09 GBytes   933 Mbits/sec                  receiver
      

      Laptop has an Intel Dual Band Wireless-AC 8260, whose max speed is 867 Mbps. It probably isn’t the bottleneck. Although with the distro running at the moment (Fedora) I have a LOT of problems with everything, so possibly things aren’t set up ideally here.

      I still didn’t upgrade the actual wireless access point for the network; I don’t recall the max speed of the current WAP, but it could be around 100 Mbps.

      So this is an interesting path to optimize. However, I am still interested in solving the original problem, because even when I am directly using Desktop, things are slow. I do not really want to upgrade it if I can get away with a software solution. There are many items on my list of projects and purchases that I’d rather concentrate on.

  • bartlbee@lemmy.sdf.org · 7 months ago

    zerotier + rclone sftp/scp mount w/ vfs cache? I haven’t tried using vfs cache with anything other than a cloud mount but it may be worth looking at. rclone mounts work just as well as sshfs; zerotier eliminates network issues
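
    As a sketch of that idea (assuming an sftp remote named desktop: has already been set up with rclone config; the cache limits are values to tune, not recommendations):

      rclone mount desktop:/srv/storage ~/mnt/desktop \
        --vfs-cache-mode full \
        --vfs-cache-max-size 20G \
        --vfs-cache-max-age 72h \
        --daemon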

    • linuxPIPEpower@discuss.tchncs.de (OP) · 7 months ago

      What would be the role of Zerotier? It seems like some sort of VPN-type application. What do I need that for?

      rclone is cool and I’ve used it before. I was never able to get it to work really consistently, so I always gave up. But that’s probably user error.

      That said, I can mount network drives and access them from within the file system; I think GVFS is doing the lifting for that. There are a couple of different ways I’ve tried, including with rclone; none seemed superior performance-wise. I should say the Desktop computer is just old and slow; there is only so much improvement possible if the files reside there. I would much prefer to work on my Laptop directly and move the files back to Desktop for safekeeping when done.

      “vfs cache” is certainly an intriguing term

      Looks like maybe the main documentation is rclone mount > vfs-file-caching, and specifically --vfs-cache-mode full

      In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.

      So if an application only reads the starts of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.

      I’m not totally sure what this would be doing, whether it is exactly what I want, or close enough. I am remembering now one reason I didn’t stick with rclone, which is that I find the documentation difficult to understand. This is a really useful lead though.

      • bartlbee@lemmy.sdf.org · 7 months ago

        Zerotier + sshfs is something I use consistently in situations similar to yours - and yes, zerotier is similar to a VPN. Using it for a constant network connection makes it less critical to have everything mirrored locally. . . But I guess this doesn’t solve your speed issue.

        I’m not an expert in rclone. I use it for connecting to various cloud drives and have occasionally used it as an alternative to sshfs. I’ve used the vfs cache for cloud syncs, but not quite in the manner you are trying. I do see there is a vfs cache read-ahead option that might help? Agreed on the documentation; sometimes their forum helps.