SpaceCadet

joined 1 year ago
[–] SpaceCadet@feddit.nl 3 points 4 weeks ago

Due to the energy crisis in Europe at the beginning of the Russian invasion of Ukraine, as a cost saving measure some cities here in Belgium decided to turn off the street lights at a certain time. I think they went dark at 23:00 or 22:00, so your Cinderella Lighting scenario.

I thought it felt quite peaceful to have some true darkness, and I wouldn't mind having it back, but at the same time, if you had to walk outside at that hour it could feel a bit unsettling, even though I live in a very safe neighborhood. There were also some practical issues, like not being able to see obstacles or the state of the pavement, so you had to tread carefully. I'd definitely buy a decent flashlight if they implemented that again.

Later, I suppose after complaints from citizens, they reverted to turning off only every other streetlight. I didn't like that at all; it was the worst of both worlds. There were still patches where you couldn't see properly, but none of the peaceful feeling of true darkness. For the past year or so it's been back to all streetlights, all night.

[–] SpaceCadet@feddit.nl 4 points 1 month ago

For example, the octa-core Ryzen 7 9700X is much more efficient than the 7700X

This has been proven untrue by several reputable reviewers, like Gamers Nexus.

[–] SpaceCadet@feddit.nl 4 points 1 month ago

Hell yeah! Who needs yesterday's data when today's data is so much better. Preach!

[–] SpaceCadet@feddit.nl 5 points 1 month ago (3 children)

You mean https://archive.archlinux.org/. I ain't keeping no stinking obsolete packages around.

[–] SpaceCadet@feddit.nl 1 points 2 months ago

It does sound like the graphics side may be the culprit somehow. Not necessarily the hardware being broken, but the R9 is fairly old and perhaps support got broken somewhere along the way, and nobody ever noticed it?

Are you using the radeon or amdgpu driver btw? There is a section on the Arch wiki that talks about instability with the radeon driver specifically with the R9 390: https://wiki.archlinux.org/title/AMDGPU#R9_390_series_poor_performance_and/or_instability
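To check which driver is actually bound to the card (assuming pciutils is installed), something like:

```shell
# List display-class PCI devices; look for the
# "Kernel driver in use:" line (radeon vs amdgpu)
lspci -k | grep -EA3 'VGA|3D|Display'
```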

IIRC Plasma 6/Qt6 does use Vulkan heavily instead of OpenGL. Some additional things you could try:

  • Disable kwin compositing
  • Try to reproduce in another desktop environment
  • Swap in a more recent GPU
[–] SpaceCadet@feddit.nl 1 points 2 months ago (2 children)

Sounds like it could be something like hardware video decoding messing up the state of your GPU, which then crashes plasma.

You could try to switch your video player to software decoding, and see if that makes the issue go away. It's less efficient but a 3700x should be able to handle any video you throw at it.
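If the player is mpv, for example, forcing software decoding for a single run is a one-line test (the filename is just a placeholder; other players have similar switches):

```shell
# Play with hardware decoding explicitly disabled;
# if the crashes stop, the hwdec path is the likely culprit
mpv --hwdec=no somevideo.mkv
```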

[–] SpaceCadet@feddit.nl 5 points 2 months ago

For me the current State of Text Rendering is that I don't have to think about text rendering anymore. And that is awesome.

I remember the dark days of having to patch freetype and cairo with infinality patches and the endless tweaking. Nowadays you get good (enough) font rendering out of the box, and it's rare that you have to tweak something.

[–] SpaceCadet@feddit.nl 2 points 2 months ago

If he’s processing LLMs or really any non-trivial DB (read: any business DB)

Actually... as a former DBA on large databases, you typically want to minimize swapping on a dedicated database system. Most database engines do a much better job at keeping useful data in memory than the Linux kernel's file caching, which is agnostic about what your files contain. There are some exceptions, like elasticsearch which almost entirely relies on the Linux filesystem cache for buffering I/O.

Anyway, database engines have query optimizers to determine the optimal path to resolve a query, but they rely on the assumption that the buffers they consider to be "in memory" actually reside in physical memory, and aren't sitting in a swapfile somewhere.

So typically, on a large database system the vendor recommendation will be to set vm.swappiness=0 to minimize memory pressure from filesystem caching, and to size the database buffers to nearly all of the system's memory, leaving only a small amount for the operating system.
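As a sketch of what that tends to look like in practice (parameter names vary per engine; the buffer size is an illustrative figure for a hypothetical 64G host, not a recommendation):

```
# persist the swappiness setting across reboots
echo "vm.swappiness = 0" > /etc/sysctl.d/90-db.conf
sysctl --system

# e.g. MySQL/InnoDB, in my.cnf: give the engine most of the RAM,
# leaving a few gigabytes for the OS
#   innodb_buffer_pool_size = 56G
```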

[–] SpaceCadet@feddit.nl 12 points 2 months ago

I’ve never understood why GNU/Linux actually needs swap

It doesn't. It's just good to have in most circumstances.

Also, sidenote: "GNU" doesn't apply here. Swapping is purely kernel business, no GNU involvement here.

Okay, I created a 4G partition for it, having 32G of RAM. I never used all that RAM, but even so, stuff regularly ends up in swap. Why does the OS waste write cycles on my SSD if it doesn’t have to?

Physical memory does not just contain program data, it also contains the filesystem cache, which is also important for performance and responsiveness. The idea is that some of the least recently used memory pages are sometimes evicted to swap in favor of more file caching.

You can tweak this behavior by setting the vm.swappiness kernel parameter with sysctl. Basically, higher values mean a higher preference for keeping file-backed pages in memory, lower values mean a higher preference for keeping regular memory pages in memory.

By default vm.swappiness = 60. If you have an abundance of memory, like a desktop system with 32G, it can be advantageous to lower the value of this parameter. If you set it to something low like 10 or 1, you will rarely see any of this paradoxical swap usage, but the system will still swap if absolutely necessary. I remember reading somewhere that it's not a good idea to set it to 0, but I don't remember the reason for that.

Alternatively, there is no rule that says you can't disable swap entirely. I've run a 32G desktop system without any swap for years. The downside is that if your 32G does run out, there will be no warning signs and the OOM killer will unceremoniously kill whatever is using the most memory.

tl;dr just do this:

sysctl vm.swappiness=10
echo "vm.swappiness=10" > /etc/sysctl.d/99-swappiness.conf
[–] SpaceCadet@feddit.nl 3 points 2 months ago

I run a lot of VMs; I typically run 2 at the same time in addition to running other programs in the background, my usecase is more eccentric than most users in the Linux space which is already pretty niche

If what you're doing involves using close to all of your system memory, it does make sense to add swap. So your use case is a good example actually.

I also have an old Arch PC that I use to run various VMs on (currently 6 VMs in use). It does have a swapfile, but the most swap I've ever seen in use is about 1GB.
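You can check how much swap is actually being touched with the standard util-linux/procps tools:

```shell
# Show active swap areas and how much of each is in use
swapon --show
# Overall memory/swap picture
free -h
```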

I’m using BTRFS with LUKS-based Full Disk Encryption, the last time I used swapfiles with BTRFS with FDE it was in 2019 and it was painful to say the least, I remember spending several weeks scouring Stack and the Arch forums in order to get it to work properly.

Weird. Sounds like you may have painted yourself a bit into a corner by using BTRFS then. I use trusty old ext4 on top of LUKS FDE, no issues with swapfiles whatsoever.

That brings me to another downside of swap partitions: encryption. You can leak sensitive data through your swap partition, so it should be encrypted. If you use a plain partition, without LUKS in between, information in your swap is exposed. So you need extra configuration to set up LUKS on your swap partition.

If you use a swapfile on an already encrypted filesystem though, you don't have to worry about it.
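For completeness: a swap partition can also be encrypted with a throwaway random key generated on every boot via /etc/crypttab, so nothing recoverable survives a power-off (the device name here is hypothetical):

```
# /etc/crypttab: map the partition with a fresh random key each boot
cryptswap  /dev/sda3  /dev/urandom  swap,cipher=aes-xts-plain64,size=512

# /etc/fstab: swap on the mapped device
/dev/mapper/cryptswap  none  swap  sw  0 0
```

The trade-off is that hibernation is impossible with a random key, since the resume image can't be decrypted after a reboot.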

when would I even want to resize swap for a single system

Maybe your requirements change (e.g. "I want to be able to hibernate"), maybe your memory configuration changes, maybe you've underestimated or overestimated how much swap you need.

Case in point: the Arch PC I mentioned above only uses up to 1GB of swap, but it has a 16GB swapfile. This discussion has brought to my attention that perhaps I should downsize the swapfile a bit and free up disk space.
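Downsizing is fortunately just a matter of recreating the file (assuming a swapfile at /swapfile and util-linux 2.38+ for mkswap --file):

```shell
swapoff /swapfile   # stop using it; needs enough free RAM to absorb its contents
rm /swapfile
mkswap -U clear --size 4G --file /swapfile   # recreate at the new size
swapon /swapfile
```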

you don’t really want to depend on swap if can

That is my position too. It's always better to have a properly sized system, or limit what you push on an existing system. High swap usage rarely results in a good experience.

[–] SpaceCadet@feddit.nl 6 points 2 months ago (2 children)

0 swap: which was pretty awful with constant unexpected system freezes/crashes

I've run Arch without swap for many years without issues. The key of course is that you need enough RAM for what you are trying to do with your computer.

There's no reason why a 32GB RAM + 0GB swap system should have more problems than a 16GB RAM + 16GB swap system with the same workload. If anything, the former is going to run much better.

swap file: finicky but doable

What is finicky about a swap file?

It's just this:

mkswap -U clear --size 4G --file /swapfile
swapon /swapfile

Done

If anything it's way easier to create a file in your filesystem than having to (re-)partition your drive to have a swap partition. Much more flexible too if you want to change your swap configuration in the future.
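To have it survive a reboot you'd also add an fstab line, and on util-linux older than 2.38 (no mkswap --size/--file) the file has to be pre-allocated and given safe permissions first:

```shell
# make it permanent
echo "/swapfile none swap defaults 0 0" >> /etc/fstab

# pre-2.38 equivalent of the mkswap invocation above
fallocate -l 4G /swapfile   # or: dd if=/dev/zero of=/swapfile bs=1M count=4096
chmod 600 /swapfile
mkswap /swapfile
```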

[–] SpaceCadet@feddit.nl 1 points 3 months ago

Hmm, I can't say that I've ever noticed this. I have a 3950x 16-core CPU and I often do video re-encoding with ffmpeg on all cores, and occasionally compile software on all cores too. I don't notice it in the GUI's responsiveness at all.

Are you absolutely sure it's not I/O related? A compile is usually doing a lot of random IO as well. What kind of drive are you running this on? Is it the same drive as your home directory is on?

Way back when I still had a much weaker 4-core CPU I had issues with window and mouse lagging when running certain heavy jobs as well, and it turned out that using ionice helped me a lot more than using nice.
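In other words, something along these lines for the build job (make is just an example command):

```shell
# Run the compile with idle I/O priority and the lowest CPU priority,
# so the desktop keeps first claim on both disk and CPU
ionice -c 3 nice -n 19 make -j"$(nproc)"
```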

I also remember that fairly recently there was a KDE/plasma stutter bug due to it reading from ~/.cache constantly. Brodie Robertson talked about it: https://www.youtube.com/watch?v=sCoioLCT5_o

6
submitted 9 months ago* (last edited 9 months ago) by SpaceCadet@feddit.nl to c/debian@lemmy.ml
 

I have a small server in my closet which is running 4 Debian 12 virtual machines under kvm/libvirt. The virtual machines have been running fine for months. They have unattended-upgrades enabled, and I generally leave them alone. I only reboot them periodically, so that the latest kernel upgrades get applied.

All the machines have an LVM configuration. Generally it's a debian-vg volume group on /dev/vda for the operating system, which has been configured automatically by the installer, and a vgdata volume group on /dev/vdb for everything else. All file systems are simple ext4, so nothing fancy. (*)

A couple of days ago, one of the virtual machines didn't come up after a routine reboot and dumped me into a maintenance shell. It complained that it couldn't mount filesystems that were on vgdata. First I tried simply rebooting the machine, but it kept dumping me into maintenance. Investigating a bit deeper, I noticed that vgdata and the block device /dev/vdb were detected but the volume group was inactive, and none of the logical volumes were found. I ran vgchange -a y vgdata and that brought it back online. After several test reboots, the problem didn't reoccur, so it seemed to be fixed permanently.

I was willing to write it off as a glitch, but then a day later I rebooted one of the other virtual machines, and it also dumped me into maintenance with the same error on its vgdata. Again, running vgchange -a y vgdata fixed the problem. I think two times in two days the same error on different virtual machines is not a coincidence, so something is going on here, but I can't figure out what.

I looked at the host logs, but I didn't find anything suspicious that could indicate a hardware error for example. I should also mention that the virtual disks of both machines live on entirely different physical disks: VM1 is on an HDD and VM2 on an SSD.

I also checked if these VMs had been running kernel 6.1.64-1 with the recent ext4 corruption bug at any point, but this does not appear to be the case.

Below is an excerpt of the systemd journal on the failed boot of the second VM, with what I think are the relevant parts. Full pastebin of the log can be found here.

Dec 16 14:40:35 omega lvm[307]: PV /dev/vdb online, VG vgdata is complete.
Dec 16 14:40:35 omega lvm[307]: VG vgdata finished
...
Dec 16 14:42:05 omega systemd[1]: dev-vgdata-lvbinaries.device: Job dev-vgdata-lvbinaries.device/start timed out.
Dec 16 14:42:05 omega systemd[1]: Timed out waiting for device dev-vgdata-lvbinaries.device - /dev/vgdata/lvbinaries.
Dec 16 14:42:05 omega systemd[1]: Dependency failed for binaries.mount - /binaries.
Dec 16 14:42:05 omega systemd[1]: Dependency failed for local-fs.target - Local File Systems.
Dec 16 14:42:05 omega systemd[1]: local-fs.target: Job local-fs.target/start failed with result 'dependency'.
Dec 16 14:42:05 omega systemd[1]: local-fs.target: Triggering OnFailure= dependencies.
Dec 16 14:42:05 omega systemd[1]: binaries.mount: Job binaries.mount/start failed with result 'dependency'.
Dec 16 14:42:05 omega systemd[1]: dev-vgdata-lvbinaries.device: Job dev-vgdata-lvbinaries.device/start failed with result 'timeout'.
Dec 16 14:42:05 omega systemd[1]: dev-vgdata-lvdata.device: Job dev-vgdata-lvdata.device/start timed out.
Dec 16 14:42:05 omega systemd[1]: Timed out waiting for device dev-vgdata-lvdata.device - /dev/vgdata/lvdata.
Dec 16 14:42:05 omega systemd[1]: Dependency failed for data.mount - /data.
Dec 16 14:42:05 omega systemd[1]: data.mount: Job data.mount/start failed with result 'dependency'.
Dec 16 14:42:05 omega systemd[1]: dev-vgdata-lvdata.device: Job dev-vgdata-lvdata.device/start failed with result 'timeout'.

(*) For reference, the disk layout on the affected machine is as follows:

# lsblk 
NAME                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
vda                   254:0    0   20G  0 disk 
├─vda1                254:1    0  487M  0 part /boot
├─vda2                254:2    0    1K  0 part 
└─vda5                254:5    0 19.5G  0 part 
  ├─debian--vg-root   253:2    0 18.6G  0 lvm  /
  └─debian--vg-swap_1 253:3    0  980M  0 lvm  [SWAP]
vdb                   254:16   0   50G  0 disk 
├─vgdata-lvbinaries   253:0    0   20G  0 lvm  /binaries
└─vgdata-lvdata       253:1    0   30G  0 lvm  /data

# vgs
  VG        #PV #LV #SN Attr   VSize   VFree
  debian-vg   1   2   0 wz--n- <19.52g    0 
  vgdata      1   2   0 wz--n- <50.00g    0 

# pvs
  PV         VG        Fmt  Attr PSize   PFree
  /dev/vda5  debian-vg lvm2 a--  <19.52g    0 
  /dev/vdb   vgdata    lvm2 a--  <50.00g    0 

# lvs
  LV         VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root       debian-vg -wi-ao----  18.56g                                                    
  swap_1     debian-vg -wi-ao---- 980.00m                                                    
  lvbinaries vgdata    -wi-ao----  20.00g                                                    
  lvdata     vgdata    -wi-ao---- <30.00g 