tvcvt

joined 1 year ago
[–] tvcvt@lemmy.ml 1 points 1 year ago

I actually missed the renaming, but I’ve seen crashes once or twice. I check it out periodically, but it still won’t read the Zoho calendar I use (it writes new events just fine), so I just end up using Thunderbird’s calendar. I’d really love to have a stand-alone calendar app.

[–] tvcvt@lemmy.ml 2 points 1 year ago

My only experience with Homebrew is on macOS, and I’ve switched to MacPorts there. Homebrew did some weird things with permissions that I didn’t care for (it chowned all of /usr/local to $USER, if I’m remembering right). That worked fine on a single-user system, but it seemed like a bad philosophy to me. This was years ago, and I don’t know how it behaves on Linux.

I also prefer Firefox, but when I need a Chromium alternative for testing, I opt for the flatpak (or the snap) version personally.
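For example, installing it from Flathub looks something like this (assuming Flathub is already set up as a remote; org.chromium.Chromium is the app ID there, if I recall correctly):

    flatpak install flathub org.chromium.Chromium
    flatpak run org.chromium.Chromium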

[–] tvcvt@lemmy.ml 3 points 1 year ago

I’ve got one running in a Proxmox cluster. Getting it set up was a bit particular (due to the T2 chip, if I remember correctly), but it’s been working flawlessly. I use the Quick Sync feature of the iGPU for my Jellyfin container.
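In case it helps anyone trying the same thing, passing the iGPU into an LXC container looks roughly like this. A sketch from memory, and the device numbers and paths can vary per system:

    # in /etc/pve/lxc/<vmid>.conf: bind the host’s DRI devices into the container
    lxc.cgroup2.devices.allow: c 226:* rwm
    lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir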

If you were going to buy something new, I think there are more cost effective boxes of about the same size and spec, but if you’ve got it already, you should definitely start playing with it.

[–] tvcvt@lemmy.ml 4 points 1 year ago (1 children)

Since you’re new to this and therefore probably haven’t set up too much infrastructure yet, let me put in a plug for ZFS as the file system underlying your data. That will unlock snapshots and the ability to send very efficient incremental backups off-site to another ZFS pool.
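The basic workflow is something like this (the pool and dataset names are placeholders):

    # take a snapshot, then replicate it to a pool on another machine over SSH
    zfs snapshot tank/data@2024-01-01
    zfs send tank/data@2024-01-01 | ssh backup-host zfs recv backuppool/data
    # later runs only need to send the changes since the last snapshot
    zfs send -i tank/data@2024-01-01 tank/data@2024-02-01 | ssh backup-host zfs recv backuppool/data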

There are commercial offerings for all this (I think rsync.net will give you a ZFS target), but I essentially have a second NAS set up at another location for the purpose.

Beyond that, I’m also a big fan of BackBlaze B2, which can give you object-based online storage.

As far as what to back up, that’ll depend on your setup. I usually find it simplest to back up my entire VM and recover by restoring the VM.
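On Proxmox, for example, that can be a single vzdump invocation (the VM ID and storage name here are placeholders):

    # snapshot-mode backup of VM 100 to the storage named 'backups'
    vzdump 100 --storage backups --mode snapshot --compress zstd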

[–] tvcvt@lemmy.ml 1 points 1 year ago

Good catch! I completely misread that bit.

[–] tvcvt@lemmy.ml 10 points 1 year ago (2 children)

It sounds like you’re seeing a few different issues here, and it makes me wonder if there’s some hardware problem causing some of this, or if the installation is botched (though it’d be odd for that to hose two different distros).

Last time I looked, Debian didn’t include sudo by default, so you’d have to install it first. To add yourself to the sudo group, log in as root and run usermod -aG sudo mariah (assuming that’s your username). Then reboot (logging out and back in should work too, but better to be thorough).
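Putting that together, run as root (with mariah standing in for your actual username):

    apt install sudo
    usermod -aG sudo mariah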

GRUB sometimes ships with a longer timeout than I like, and you can edit that in the /etc/default/grub file to something of your liking.
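For instance (1 second is just my preference; the config has to be regenerated for the change to take effect):

    # in /etc/default/grub
    GRUB_TIMEOUT=1

    # then, as root:
    update-grub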

Not sure what you mean about the commands, but maybe it’s an issue with your $PATH.
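A quick way to check is to see whether the shell can locate the command at all (substitute whichever command is failing for the placeholder):

    echo "$PATH"
    type some-command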

[–] tvcvt@lemmy.ml 1 points 1 year ago

Right now I don’t think you need to do anything special. Unless you did something out of the ordinary, Tailscale didn’t put your computers directly on the internet. What it did was create what’s called an overlay network that allows devices connected to that network to talk to each other. It’s private and encrypted and random folks on the internet can’t get on it by default.

Do learn some networking (there are tons of great YouTube channels, books, and podcasts that can help), but right now you can afford to do that slowly rather than rushing to complicate your setup before you understand it. There’s so much material out there, but I found this particularly useful for getting an overview when I was first learning: https://www.grc.com/sn/sn-309.pdf.

[–] tvcvt@lemmy.ml 3 points 1 year ago

I keep my dotfiles in a git repo and just do a git pull to update them. That could definitely be a cron job if you needed it.
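Something like this in your crontab would do it (the repo path is a placeholder):

    # crontab entry: pull dotfile updates hourly
    0 * * * * cd "$HOME/.dotfiles" && git pull --quiet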

SSH keys are a little trickier. I’d like to tell you I have a unique key for each of my desktop machines, since that would be best practice, but that’s not the case. Instead I have a Syncthing shared folder. When I get around to cleaning that up, I’ll probably do just that and keep an authorized_keys and known_hosts file in git, so I can pull them to the hosts that need them, with a cron job to keep them updated.

[–] tvcvt@lemmy.ml 3 points 1 year ago

Not a symlink, but you can add source /path/to/aliases to your .bashrc file to load them from another file. I do that and keep all of my dotfiles in a git repo.
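That is, a line like this near the end of ~/.bashrc (the aliases path is whatever you keep in your repo):

    # load aliases from a separate, version-controlled file
    [ -f ~/.aliases ] && source ~/.aliases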

[–] tvcvt@lemmy.ml 2 points 1 year ago

This is a pretty routine workflow for me too. VNC works okay, but there’s some special sauce in the macOS implementation that makes it much more responsive Mac to Mac. For Linux to Mac, I waffle between NoMachine and Parsec. Lately I’ve been leaning toward Parsec, and it’s a pretty usable experience.

[–] tvcvt@lemmy.ml 2 points 1 year ago

How about an alternate route? If transferring information between computers is the goal, you could skip the external drive altogether and put Syncthing on both machines. Then you could just share the appropriate directories between the two without the go-between.

[–] tvcvt@lemmy.ml 6 points 1 year ago

I haven’t noticed anyone else bring it up, but you mentioned in passing the possibility of using a RAID 0. I’d avoid that except in very specific circumstances. It’s potentially fine for a scratch-disk type of scenario, but if any member of the array fails, the whole array is toast. The chance of failure increases with each disk added, so a RAID 0 is less reliable than a single disk. I definitely wouldn’t want to trust my family’s photos to it.
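To put rough numbers on it: if each disk independently has, say, a 3% chance of failing in a given year, a two-disk RAID 0 fails whenever either disk does, so the array’s annual failure chance is about 1 − 0.97² ≈ 5.9%, and it only climbs from there with each disk you add.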
