Proxmox - Allow USB attached SCSI passthrough disks to sleep
Pavan Bagde
Migrating OMV bare metal to a Proxmox VM
Recently, I decided to move my OpenMediaVault (OMV) setup into a VM on my Proxmox server.
Previously, OMV had been running bare metal on an Intel NUC for years without any issues. It’s been rock solid. But I started thinking about rearranging things, possibly moving Home Assistant from a VM back to bare metal on the NUC, and that got me curious about trying OMV in a VM instead.
Even though both OMV and Home Assistant are the kind of systems that are usually happiest on bare metal, running OMV in a VM on Proxmox has one nice advantage: super fast file transfers between the NAS and other VMs on the same host. So I figured I’d give it a try.
The Initial Setup
Getting OMV up and running inside a VM on Proxmox was quick and painless. I connected the external disk enclosure over 10Gbps USB 3.2, then passed the individual disks through to the VM as USB devices.
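For reference, USB passthrough in Proxmox looks roughly like this (the VM ID and the vendor:product ID below are placeholders, not my actual values):

```shell
# Find the USB disk's vendor:product ID on the Proxmox host
lsusb

# Pass the device through to the VM (VM 100 is a placeholder)
# usb3=1 attaches it to an emulated USB 3 controller for full speed
qm set 100 -usb0 host=0bc2:231a,usb3=1
```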
This worked great at first. I could see all the SMART data in OMV, and everything felt snappy.
But then… trouble.
After just a couple of days, I noticed things going sideways when I tried to copy large amounts of data between the disks in the DAS. It looked like the USB bus was getting overwhelmed, and drives were dropping out mid-transfer.
So that didn’t feel too stable.
Switching to Virtual SCSI Disks
I did some digging online and came across a better approach: passing the disks to the VM as virtual SCSI drives instead of USB devices. I switched to that, tested it out, and everything seemed much more stable.
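If you want to try the same thing, the idea is to hand the raw block device to the VM as a SCSI disk. A sketch of how that looks (VM ID and disk ID are placeholders; use your own values from `/dev/disk/by-id/`):

```shell
# List stable by-id paths so the mapping survives reboots and
# doesn't depend on /dev/sdX ordering
ls -l /dev/disk/by-id/

# Attach the physical disk to VM 100 as a virtual SCSI drive
qm set 100 -scsi1 /dev/disk/by-id/usb-SomeVendor_SomeDisk_SERIAL
</code omitted>
qm set 100 -scsi1 /dev/disk/by-id/usb-SomeVendor_SomeDisk_SERIAL
```

Using the by-id path rather than /dev/sdX means the passthrough keeps pointing at the same physical disk even if device names shuffle after a reboot.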
Problem solved?
Not quite.
New Issue: Drives Won’t Sleep
After running this setup for a while, I realized something odd. The disks were spinning 24/7, even when they weren’t being used.
Back on my bare-metal OMV install, the disks would go to sleep when idle. Here, they were just staying awake constantly.
Since the disks are only used for a few hours a day, this was kind of wasteful, both in power and disk wear. So I started digging again.
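If you want to check this on your own setup, hdparm can report a drive’s power state (assuming hdparm is installed and the drive supports the query):

```shell
# -C reports the current power mode: active/idle, standby, or sleeping
hdparm -C /dev/sda
```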
The LVM Fix
Eventually, I found this helpful forum post that pointed me in the right direction.
Turns out, Proxmox’s vgscan was constantly poking the disks, which kept them awake.
Here’s how I fixed that:
- Open the config file on the Proxmox host:
nano /etc/lvm/lvm.conf
- Find the global_filter section and update it to ignore the drives you want to let spin down.
Before:
global_filter=["r|/dev/zd.*|"]
After (example for a few drives):
global_filter=["r|/dev/zd.*|", "r|/dev/sda.*|", "r|/dev/sdb.*|", "r|/dev/sdc.*|", "r|/dev/sdd.*|"]
- Then refresh the volume groups:
vgscan
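If you want a quick sanity check that the filter actually took effect, LVM can print the configuration it is currently running with:

```shell
# Print the global_filter value LVM is actually using
lvmconfig --typeconfig current devices/global_filter
```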
That got rid of half the problem.
The SMARTD Issue
Even after the vgscan fix, I noticed the drives would still spin up every so often. After some more observation, I realized Proxmox was logging drive temperatures regularly, which meant smartd was waking up the disks just to check on them.
To stop that, I disabled smartd for now:
systemctl disable smartd
That finally let the disks sleep properly when idle.
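One thing worth noting: `disable` on its own only affects future boots. To also stop the instance that is already running, in one go (assuming a standard systemd setup):

```shell
# --now stops the running service in addition to disabling it at boot
systemctl disable --now smartd
```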
What’s Next
At some point, I’ll go back and re-enable smartd with a custom config. Ideally, I’d like it to run short SMART tests once a week, and leave the drives alone the rest of the time.
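smartd’s config can express both of those ideas. A sketch of what I have in mind for /etc/smartd.conf (untested on my setup, adjust devices and timing to taste):

```
# Monitor all disks, but don't spin up drives that are in standby
# (-n standby, with q to quiet the skipped-check log noise),
# and run a short self-test every Sunday at 03:00 (-s schedule regex)
DEVICESCAN -n standby,q -s S/../../7/03
```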
For now, I’m happy. OMV is running stable in a VM, data moves fast between VMs, and the drives finally go to sleep when they’re not needed, just like they did on bare metal.