

Well, no. Initially I had the storage set on the VM where it's running. I wasn't expecting it to download all that data.
OK, so maybe I didn't explain myself well. What I meant was that I would like resilience, so that if one server goes down, I've got the other to quickly fire up. The only problem is that the slave server has a smaller pool, so I can't replicate the whole pool of the master server.
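A minimal sketch of the usual workaround, assuming hypothetical pool and dataset names (tank/vmdata on the master, backup on the slave): replicate just the individual datasets that matter rather than the whole pool, so only what fits goes to the smaller machine.

# Snapshot the dataset on the master (-r includes child datasets)
zfs snapshot -r tank/vmdata@repl1
# Send it to the slave's smaller pool over SSH
zfs send -R tank/vmdata@repl1 | ssh slave zfs recv -F backup/vmdata
# Later runs send only the delta between two snapshots
zfs snapshot -r tank/vmdata@repl2
zfs send -R -i tank/vmdata@repl1 tank/vmdata@repl2 | ssh slave zfs recv -F backup/vmdata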
Well, it's ticked but not working then, because I found duplicate links. Maybe it only works if you try to store the same link twice, but it doesn't catch duplicates in the imported bookmarks.
I was using floccus, but what is the point of saving bookmarks twice, once in Linkwarden and once in the browser?
Absolutely, none of that is going past my router.
Interestingly, I did something similar with Linkwarden, where I installed the datasets in /home/user/linkwarden/data. The damn thing caused my VM to run out of space because it started downloading pages for the 4000 bookmarks I had. It went into crisis mode, so I stopped it. I then created a dataset on my TrueNAS Scale machine and NFS-exported it to the VM on the same server. I simply did a cp -R to the new NFS mountpoint, edited the yml file with the new paths and voila! It seems to be working. I know that some Docker containers don't like working off an NFS share, so we'll see. I wonder how well this will work when the VM is on a different machine, as there is then a network cable, a switch, etc. in between. If for any reason the NAS goes down, the Docker containers on the Proxmox VM will be crying as they'll lose the link to their volumes? Can anything be done about this? I guess it can never be as resilient as having the VM and NAS on the same machine.
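For what it's worth, instead of bind-mounting a path that happens to be an NFS mountpoint, Docker's local volume driver can mount the NFS export itself; then a missing NAS fails the container start loudly instead of the container writing into an empty local directory. A minimal sketch with a hypothetical NAS address and export path (the Linkwarden image and container path here are from its shipped compose file; verify against your yml):

services:
  linkwarden:
    image: ghcr.io/linkwarden/linkwarden:latest
    volumes:
      - linkwarden_data:/data/data
volumes:
  linkwarden_data:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.50,rw,hard,nfsvers=4   # hypothetical NAS address; 'hard' blocks I/O rather than erroring out if the NAS drops
      device: ":/mnt/tank/linkwarden"          # hypothetical export path on the TrueNAS box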
The first rule of containers is that you do not store any data in containers.
Do you mean they should be bind mounts? From here, a bind mount should look like this:
version: '3.8'
services:
  my_container:
    image: my_image:latest
    volumes:
      - /path/on/host:/path/in/container
So referring to my Firefly compose above, I should simply be able to copy over /var/www/html/storage/upload for the main app data, and the database stored in /var/lib/mysql can just be copied over? But then why does my local folder not have any storage/upload folders?
user@vm101:/var/www/html$ ls
index.html
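That folder is nearly empty because firefly_iii_upload and firefly_iii_db in the compose file are named volumes, not bind mounts: Docker keeps their data under its own directory, not at /var/www/html on the host. You can ask Docker where a named volume actually lives (compose prefixes volume names with the project name, so check docker volume ls for the exact name):

docker volume ls
docker volume inspect firefly_iii_upload
# Look at the "Mountpoint" field, typically something like
# /var/lib/docker/volumes/<project>_firefly_iii_upload/_data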
Here is my docker compose file below. I think I used the standard file the developer ships, simply because I was keen to get Firefly going without fully understanding the complexity of Docker volume storage.
# The Firefly III Data Importer will ask you for the Firefly III URL and a "Client ID".
# You can generate the Client ID at http://localhost/profile (after registering)
# The Firefly III URL is: http://app:8080/
#
# Other URL's will give 500 | Server Error
#
services:
  app:
    image: fireflyiii/core:latest
    hostname: app
    container_name: firefly_iii_core
    networks:
      - firefly_iii
    restart: always
    volumes:
      - firefly_iii_upload:/var/www/html/storage/upload
    env_file: .env
    ports:
      - '84:8080'
    depends_on:
      - db
  db:
    image: mariadb:lts
    hostname: db
    container_name: firefly_iii_db
    networks:
      - firefly_iii
    restart: always
    env_file: .db.env
    volumes:
      - firefly_iii_db:/var/lib/mysql
  importer:
    image: fireflyiii/data-importer:latest
    hostname: importer
    restart: always
    container_name: firefly_iii_importer
    networks:
      - firefly_iii
    ports:
      - '81:8080'
    depends_on:
      - app
    env_file: .importer.env
  cron:
    #
    # To make this work, set STATIC_CRON_TOKEN in your .env file or as an environment variable and replace REPLACEME below
    # The STATIC_CRON_TOKEN must be *exactly* 32 characters long
    #
    image: alpine
    container_name: firefly_iii_cron
    restart: always
    command: sh -c "echo \"0 3 * * * wget -qO- http://app:8080/api/v1/cron/XTrhfJh9crQGfGst0OxoU7BCRD9VepYb;echo\" | crontab - && crond -f -L /dev/stdout"
    networks:
      - firefly_iii
volumes:
  firefly_iii_upload:
  firefly_iii_db:
networks:
  firefly_iii:
    driver: bridge
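For reference, if you wanted host paths you could copy into (or point at the NFS mount), those two named volumes would become bind mounts in the services, roughly like this with hypothetical host paths (stop the db container before copying /var/lib/mysql, or the copy may be inconsistent):

services:
  app:
    volumes:
      - /srv/firefly/upload:/var/www/html/storage/upload   # hypothetical host path
  db:
    volumes:
      - /srv/firefly/db:/var/lib/mysql                     # hypothetical host path
# the top-level volumes: entries for these two are then no longer needed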
Documentation is impressive. I need to take a look. Thanks for sharing.
Yeah, it seems registration with IPinfo is required so that you can get a token, which then allows pfBlockerNG to download the ASN database. I've just registered with IPinfo and it seems (unless it's a false alarm) that it now works.
However, I've also learned that the Aruba ASNs I had didn't include the SMTPS server I was using.
Basically, I did an nslookup smtps.aruba.it, got the IP, and then searched for the ASN using the Team Cymru IP to ASN Lookup v1.0 here https://asn.cymru.com/cgi-bin/whois.cgi. I then copied the ASN into the WAN_EGRESS list and bingo, it's working.
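The same lookup can also be done from a terminal via Team Cymru's whois service, in case the web form gets tedious (the hostname is the one from my case; the IP placeholder is whatever nslookup returns):

# Resolve the mail server to an IP
nslookup smtps.aruba.it
# Ask Team Cymru which ASN announces that IP (-v adds AS name and registry)
whois -h whois.cymru.com " -v <ip-from-nslookup>"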
I agree. In principle, Nextcloud is a great idea and project, but it has a lot of issues that make maintaining it a pain. I had it for over 2 years and every update was painful. I gave up and moved to Syncthing+Radicale. Is there something I miss? Yes, the ability to share, as Syncthing doesn't allow sharing.
Looks interesting, but I couldn't find anything on the Android app. There isn't any "Monica" on F-Droid either.
You have opened a new world here. I had no idea that these no-code solutions existed. Sounds very interesting.
They are mostly cash. On average 5-10 per day over a 5-hour day.
Ah right. Docker seems to have gained more ground than LXC if this is the first time I've come across it. I hadn't realised they were similar, especially after I discovered that people are running Docker in LXC…
What I read here is concerning. Just when I was getting into the swing of Docker… :-( Is LXC the future then?
Ok, I hadn't realised that the helper script installs Docker. I thought LXC was an alternative to Docker.
Regarding the VM option, I did think of doing just that, but I read a lot about it using too many resources with Frigate, and LXC seems to be the more efficient option when it comes to resources.
OK, I should have been clearer. By “community LXC repository on GitHub” I actually meant that I used the LXC scripts. It did go through a few questions at the start, but nothing relating to storage and camera setup.
That is brilliant, thanks. I haven't read it all yet, but it looks like what I need.
Mate, it was a sarcastic statement 😉