If it's constantly reading the same data, it's served from cache, which is significantly faster than reading from the actual drive. Because the graph shows average latency and cache reads are very fast, they pull the latency shown in that graph down.
15-30ms latency seems reasonable for that hardware (P420 in RAID 10 with 4x 4TB 7k SAS HDDs).
Basically, a single SAS disk can do roughly 150 IOPS of random I/O, so in the worst case of 4k random reads you'd get about 600 KB/s from it.
For the migration, if the I/O is sequential (depending on the filesystem layout), performance can be much higher, up to the drive's streaming limit of around 100 MB/s for large sequential reads.
Then roughly 2x that for your RAID 10 config, since the data is striped across two mirror pairs.
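If you want to sanity-check those numbers, a quick 4k random-read fio run inside one of the guests should show whether the array lands in that range. The file path and size below are just placeholders, adjust them for your setup:

# 4k random reads, direct I/O, 60 seconds at queue depth 32
fio --name=randread --filename=/tmp/fio-test.bin --size=8G \
  --rw=randread --bs=4k --ioengine=libaio --direct=1 \
  --iodepth=32 --runtime=60 --time_based --group_reporting

On four 7k spindles in RAID 10 I'd expect that to report a few hundred read IOPS at best.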
   GID VMNAME VDEVNAME NVDISK CMDS/s READS/s WRITES/s MBREAD/s MBWRTN/s LAT/rd  LAT/wr
 12953 dns    -             1   0.00    0.00     0.00     0.00     0.00  0.000   0.000
 16904 fw     -             2   5.84    0.00     5.84     0.00     0.02  0.000  18.408
 20481 vcsa   -            13  16.58    0.00    16.58     0.00     0.08  0.000  37.582
130847        -             2   0.00    0.00     0.00     0.00     0.00  0.000   0.000
626694 deb    -             2  12.06    0.00    12.06     0.00     0.46  0.000   6.586
As you can see, there isn't much IOPS per VM. For example, the vcsa VM latency I captured floats between 20 and 100 ms, while deb has similar IOPS but lower latency.
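If you want to capture the same stats over time instead of watching the screen, esxtop batch mode works; the interval and sample count here are just examples:

# sample every 10 seconds, 60 samples, dump to CSV for later analysis
esxtop -b -d 10 -n 60 > /tmp/esxtop-capture.csv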
I think what you want to do is go into your db VM and run a dd, fio, or bonnie++ test with a working set at least 2x the VM's RAM, and see what the steady-state disk performance is.
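For example, something like this fio run (the path, size, and runtime are just examples; size the test file at roughly 2x the VM's RAM so caching can't flatter the result):

# mixed 70/30 random read/write, direct I/O, 5 minutes
fio --name=steady --filename=/tmp/fio-steady.bin --size=16G \
  --rw=randrw --rwmixread=70 --bs=4k --ioengine=libaio --direct=1 \
  --iodepth=16 --runtime=300 --time_based --group_reporting

dd only gives you sequential numbers, so fio or bonnie++ is the better pick if the VMs do mostly random I/O.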
I don't have a db VM; I think you are referring to deb, which is short for Debian.
That is the thing: my total IOPS are less than 150, and with 4 disks in RAID 10 I believe I should get ~300 IOPS due to mirroring. If you look at the graph, the red line is transfer in kbps and the blue one is latency; why is it dropping when the disk is highly utilised? I will post htop when I'm back from work.
I had the same issue, only using a 930-8i w/ 2M cache. Honestly, performance sucked on all of my VMs. I reinstalled the server with Rocky Linux 8.8 and KVM using the same array, and performance was acceptable (the array was configured as an LVM volume). I then added an NVMe drive as an LVM cache and performance was much better (good enough for my homelab). Too bad, since I really prefer VMware.
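For anyone wanting to try the same thing, an LVM cache on an NVMe device looks roughly like this; the VG/LV names are made up, substitute your own:

# add the NVMe device to the existing volume group
pvcreate /dev/nvme0n1
vgextend vg_array /dev/nvme0n1
# carve a cache pool out of the NVMe and attach it to the LV holding the VM images
lvcreate --type cache-pool -L 200G -n nvme_cache vg_array /dev/nvme0n1
lvconvert --type cache --cachepool vg_array/nvme_cache --cachemode writethrough vg_array/vm_storage

writethrough is the safer default; writeback is faster for writes but risks data if the cache device dies.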