I'm having a weird issue with my WD Sharespace device. I use the Sharespace as an NFS datastore to hold all of our ISO and install files. I have one ESX 4 cluster with 5 servers, plus two standalone ESX 4 servers. The Sharespace NAS has had some custom config work done to allow this to function (no_root_squash added to the shares in its exports file, which ESX requires for NFS mounts), and it was working fine until about a month ago. I did an upgrade on the Sharespace NAS, and after the reboot (and after re-adding no_root_squash to the exports file), I found I could no longer mount the NFS shares on any server in the ESX cluster. I get:
"Create NAS datastore: server1.company.local: Error during the configuration of the host: Cannot open volume: /vmfs/volumes/a4e52396-dcdec3f8"
I do not get this error on the two standalone ESX servers; there I was able to mount the NFS shares just fine.
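On the standalone hosts, mounting from the service console works without complaint, e.g. (the NAS hostname, share path, and datastore label here are placeholders):

    esxcfg-nas -a -o sharespace.company.local -s /shares/isofiles iso_store
    esxcfg-nas -l    # lists the NAS datastore as mounted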
I've gone through and checked the security profiles on each host; they all look correct, and the NFS client is allowed through the firewall.
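I checked this from the service console too, not just in the vSphere client (this assumes nfsClient is the firewall service name, which is what my ESX 4 hosts call it):

    esxcfg-firewall -q nfsClient    # shows whether the NFS client ports are open
    esxcfg-firewall -e nfsClient    # re-enables it if it somehow got turned off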
During troubleshooting, I traced the volume ESX said it couldn't open, figured out which datastore it was, migrated all VMs off of it, and then removed the datastore from the system. I still get the same error, just naming a different datastore. It seems the last datastore in the list gets a volume ID with only two groups of characters, while the rest have four (like 4d3e6612-a6616638-0e9d-001b211632b5). I'm not sure how that's related; from what I can tell the two-segment IDs are just what ESX assigns to NFS datastores, while the four-segment UUIDs belong to VMFS volumes, and I get the same kind of listing on the standalone ESX servers, where the NFS share mounted just fine.
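In case it helps anyone, this is how I traced the ID from the error back to a datastore: /vmfs/volumes symlinks each datastore label to its volume ID, so a listing shows the mapping (the label below is a placeholder):

    ls -l /vmfs/volumes
    # e.g.  iso_store -> a4e52396-dcdec3f8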
I checked the hosts.allow file on the NAS; it lists the servers in the cluster as well as the standalone servers. hosts.deny has one line: "portmap:ALL".
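Roughly, the two files look like this (the hostnames are placeholders following the same pattern as server1.company.local):

    # /etc/hosts.allow on the Sharespace
    portmap: server1.company.local server2.company.local server3.company.local

    # /etc/hosts.deny
    portmap:ALL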
Any clues?