Channel: VMware Communities : Popular Discussions - VMware ESX 4

Downloading ISO images with SSH


I am trying to download ISO images directly onto the VMware ESX host using wget over SSH. Everything seems to work fine, but it can't download more than 150 MB, and I really need this to work, as downloading locally and then uploading would be a lot of work. I get the error message "get: short write".

 

Can anyone help? Thanks in advance.
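
For what it's worth, a "short write" from wget usually means the filesystem it is writing to has filled up, and the service console's default working directory sits on a small partition. Writing straight into a VMFS datastore is worth a try; a sketch, where Storage1 is just a placeholder for any datastore with enough free space:

mkdir -p /vmfs/volumes/Storage1/ISO      # create a folder on a datastore with room to spare
cd /vmfs/volumes/Storage1/ISO
wget http://example.com/path/to/image.iso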


Help to do RDM to local RAID array


Hi guys,

 

I'm trying to get raw access to a RAID volume on my ESX server (connected to an Adaptec 5805 RAID controller), but it seems much more difficult than I expected. The array holds an NTFS partition. ESX4 is installed on a regular SATA disk (/dev/sdb).

 

 

Do you have any ideas as to what I'm doing wrong?

 

 

Let me start with some information about the array:

 

 

root@localhost test# fdisk -l

 

 

Disk /dev/sda: 39.6 GB, 39625687040 bytes

255 heads, 63 sectors/track, 4817 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

 

Device Boot Start End Blocks Id System

/dev/sda1 1 4817 38692521 7 HPFS/NTFS      <---- This is the NTFS partition that I would like to have raw access to.

 

 

Disk /dev/sdb: 160.0 GB, 160041885696 bytes   <---- ESX4 is installed onto this separate hard disk, which is attached to one of the motherboard SATA ports (not the Adaptec controller)

64 heads, 32 sectors/track, 152627 cylinders

Units = cylinders of 2048 * 512 = 1048576 bytes

 

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sdb1   *           1        1100     1126384   83  Linux

/dev/sdb2            1101        1210      112640   fc  VMware VMKCORE

/dev/sdb3            1211      152627   155051008    5  Extended

/dev/sdb5            1211      152627   155050992   fb  VMware VMFS

 

 

Disk /dev/sdc: 8304 MB, 8304721920 bytes

255 heads, 63 sectors/track, 1009 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sdc1               1         117      939771   82  Linux swap / Solaris

/dev/sdc2             118         372     2048287+  83  Linux

/dev/sdc3             373        1009     5116702+   5  Extended

/dev/sdc5             373        1009     5116671   83  Linux

 

-


 

root@localhost test# ls -l /vmfs/devices/disks/

total 545021409

-rw------- 1 root root 39625687040 Aug 5 20:22 mpx.vmhba1:C0:T0:L0

-rw------- 1 root root 39621141504 Aug 5 20:22 mpx.vmhba1:C0:T0:L0:1

-rw------- 1 root root 160041885696 Aug 5 20:22 t10.ATA_____ST3160811AS_________________________________________6PT07MEP

-rw------- 1 root root 1153417216 Aug 5 20:22 t10.ATA_____ST3160811AS_________________________________________6PT07MEP:1

-rw------- 1 root root 115343360 Aug 5 20:22 t10.ATA_____ST3160811AS_________________________________________6PT07MEP:2

-rw------- 1 root root 158772232192 Aug 5 20:22 t10.ATA_____ST3160811AS_________________________________________6PT07MEP:3

-rw------- 1 root root 158772215808 Aug 5 20:22 t10.ATA_____ST3160811AS_________________________________________6PT07MEP:5

lrwxrwxrwx 1 root root 19 Aug 5 20:22 vml.0000000000766d686261313a303a30 -> mpx.vmhba1:C0:T0:L0

lrwxrwxrwx 1 root root 21 Aug 5 20:22 vml.0000000000766d686261313a303a30:1 -> mpx.vmhba1:C0:T0:L0:1

lrwxrwxrwx 1 root root 72 Aug 5 20:22 vml.010000000020202020202020202020202036505430374d4550535433313630 -> t10.ATA_____ST3160811AS_________________________________________6PT07MEP

lrwxrwxrwx 1 root root 74 Aug 5 20:22 vml.010000000020202020202020202020202036505430374d4550535433313630:1 -> t10.ATA_____ST3160811AS_________________________________________6PT07MEP:1

lrwxrwxrwx 1 root root 74 Aug 5 20:22 vml.010000000020202020202020202020202036505430374d4550535433313630:2 -> t10.ATA_____ST3160811AS_________________________________________6PT07MEP:2

lrwxrwxrwx 1 root root 74 Aug 5 20:22 vml.010000000020202020202020202020202036505430374d4550535433313630:3 -> t10.ATA_____ST3160811AS_________________________________________6PT07MEP:3

lrwxrwxrwx 1 root root 74 Aug 5 20:22 vml.010000000020202020202020202020202036505430374d4550535433313630:5 -> t10.ATA_____ST3160811AS_________________________________________6PT07MEP:5

 

 

 

 

-


 

root@localhost test# esxcfg-scsidevs -l

mpx.vmhba1:C0:T0:L0

Device Type: Direct-Access

Size: 37790 MB

Display Name: Local Adaptec Disk (mpx.vmhba1:C0:T0:L0)

Plugin: NMP

Console Device: /dev/sda

Devfs Path: /vmfs/devices/disks/mpx.vmhba1:C0:T0:L0

Vendor: Adaptec Model: RAID0 Revis: V1.0

SCSI Level: 2 Is Pseudo: false Status: on

Is RDM Capable: false Is Removable: false

Is Local: true

Other Names:

vml.0000000000766d686261313a303a30

 

 

Here's what I've tried:

 

 

 

root@localhost test# vmkfstools -z /vmfs/devices/disks/vml.0000000000766d686261313a303a30 /vmfs/volumes/Storage1/test/test123.vmdk --verbose 1

DISKLIB-LIB : CREATE: "/vmfs/volumes/Storage1/test/test123.vmdk" -- vmfsPassthroughRawDeviceMap capacity=0 (0 bytes) adapter=buslogic devicePath='/vmfs/devices/disks/vml.0000000000766d686261313a303a30'

Failed to create virtual disk: Invalid argument (1441801).

 

 

root@localhost test# vmkfstools -z /vmfs/devices/disks/vml.0000000000766d686261313a303a30:1 /vmfs/volumes/Storage1/test/test123.vmdk --verbose 1

DISKLIB-LIB : CREATE: "/vmfs/volumes/Storage1/test/test123.vmdk" -- vmfsPassthroughRawDeviceMap capacity=0 (0 bytes) adapter=buslogic devicePath='/vmfs/devices/disks/vml.0000000000766d686261313a303a30:1'

Failed to create virtual disk: Invalid argument (1441801).

 

 

root@localhost test# vmkfstools -z /vmfs/devices/disks/vml.0000000000766d686261313a303a30:0 /vmfs/volumes/Storage1/test/test123.vmdk --verbose 1

DISKLIB-LIB : CREATE: "/vmfs/volumes/Storage1/test/test123.vmdk" -- vmfsPassthroughRawDeviceMap capacity=0 (0 bytes) adapter=buslogic devicePath='/vmfs/devices/disks/vml.0000000000766d686261313a303a30:0'

DISKLIB-LIB : Only disks up to 2TB-512 are supported.

Failed to create virtual disk: The destination file system does not support large files (12).

 

 

root@localhost test# vmkfstools -z /vmfs/devices/disks/mpx.vmhba1\:C0\:T0\:L0 /vmfs/volumes/Storage1/test/test123.vmdk --verbose 1

DISKLIB-LIB : CREATE: "/vmfs/volumes/Storage1/test/test123.vmdk" -- vmfsPassthroughRawDeviceMap capacity=0 (0 bytes) adapter=buslogic devicePath='/vmfs/devices/disks/mpx.vmhba1:C0:T0:L0'

Failed to create virtual disk: Invalid argument (1441801).

 

 

root@localhost test# vmkfstools -z /vmfs/devices/disks/mpx.vmhba1\:C0\:T0\:L0\:0 /vmfs/volumes/Storage1/test/test123.vmdk --verbose 1

DISKLIB-LIB : CREATE: "/vmfs/volumes/Storage1/test/test123.vmdk" -- vmfsPassthroughRawDeviceMap capacity=0 (0 bytes) adapter=buslogic devicePath='/vmfs/devices/disks/mpx.vmhba1:C0:T0:L0:0'

DISKLIB-LIB : Only disks up to 2TB-512 are supported.

Failed to create virtual disk: The destination file system does not support large files (12).

 

 

root@localhost test# vmkfstools -z /vmfs/devices/disks/mpx.vmhba1\:C0\:T0\:L0\:1 /vmfs/volumes/Storage1/test/test123.vmdk --verbose 1

DISKLIB-LIB : CREATE: "/vmfs/volumes/Storage1/test/test123.vmdk" -- vmfsPassthroughRawDeviceMap capacity=0 (0 bytes) adapter=buslogic devicePath='/vmfs/devices/disks/mpx.vmhba1:C0:T0:L0:1'

Failed to create virtual disk: Invalid argument (1441801).

 

 

root@localhost test# vmkfstools -z /vmfs/devices/disks/vmhba1\:C0\:T0\:L0 /vmfs/volumes/Storage1/test/test123.vmdk --verbose 1

DISKLIB-LIB : CREATE: "/vmfs/volumes/Storage1/test/test123.vmdk" -- vmfsPassthroughRawDeviceMap capacity=0 (0 bytes) adapter=buslogic devicePath='/vmfs/devices/disks/vmhba1:C0:T0:L0'

DISKLIB-LIB : Only disks up to 2TB-512 are supported.

Failed to create virtual disk: The destination file system does not support large files (12).

 

 

root@localhost test# vmkfstools -z /vmfs/devices/disks/vmhba1\:C0\:T0\:L0\:0 /vmfs/volumes/Storage1/test/test123.vmdk --verbose 1

DISKLIB-LIB : CREATE: "/vmfs/volumes/Storage1/test/test123.vmdk" -- vmfsPassthroughRawDeviceMap capacity=0 (0 bytes) adapter=buslogic devicePath='/vmfs/devices/disks/vmhba1:C0:T0:L0:0'

DISKLIB-LIB : Only disks up to 2TB-512 are supported.

Failed to create virtual disk: The destination file system does not support large files (12).

 

 

root@localhost test# vmkfstools -z /vmfs/devices/disks/vmhba1\:C0\:T0\:L0\:1 /vmfs/volumes/Storage1/test/test123.vmdk --verbose 1

DISKLIB-LIB : CREATE: "/vmfs/volumes/Storage1/test/test123.vmdk" -- vmfsPassthroughRawDeviceMap capacity=0 (0 bytes) adapter=buslogic devicePath='/vmfs/devices/disks/vmhba1:C0:T0:L0:1'

DISKLIB-LIB : Only disks up to 2TB-512 are supported.

Failed to create virtual disk: The destination file system does not support large files (12).

 

 

Any ideas? I remember reading a post where someone stated that you cannot create an RDM to a local VMHBA device. Is that true?
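
One variation that gets suggested for local controllers (the device above reports "Is RDM Capable: false") is a virtual compatibility RDM with -r instead of the physical passthrough -z; untested here, but for reference, using the device and datastore names from the output above:

vmkfstools -r /vmfs/devices/disks/mpx.vmhba1\:C0\:T0\:L0 /vmfs/volumes/Storage1/test/test123.vmdk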

 

 

Thanks in advance,

 

 

Jesper

Alerts and warning in vCenter.


I logged into vCenter this morning and found two alerts and one warning; they are listed below.

 

1. Alert - Memory Device 12 DIMM 12 Status: Uncorrectable ECC          CurrentState: Assert

2. Alert - Processor 2 CPU 2 Status 0: IERR                    CurrentState: Assert

 

1. Warning - System Board 1 PCIe Status: Fault Status                CurrentState: Assert

 

 

Now, if I acknowledge these errors, I assume that doesn't mean they are fixed, but I am curious what I need to do to fix them, or at least where I can look to get more details about them.

 

Thanks!

Advanced parameter Disk.DiskMaxIOSize ???


A customer has ESX 4 hosts. They have the advanced parameter Disk.DiskMaxIOSize set to 32 on all of their servers. The default value is 32767.

 

I can't find documentation on this parameter. Does "32" make sense, or are they throttling their I/O down to a 32-byte block size?
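
For reference, the parameter is specified in KB, so the default of 32767 is roughly 32 MB per I/O and a value of 32 caps I/Os at 32 KB rather than 32 bytes. A quick way to check and, if desired, restore the default from the service console; a sketch, untested against this customer's build:

esxcfg-advcfg -g /Disk/DiskMaxIOSize          # show the current value (in KB)
esxcfg-advcfg -s 32767 /Disk/DiskMaxIOSize    # restore the default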

 

Thanks,

Eric

ESX 4.1 and 10GBit NICs


Hello!

 

I have a problem with 10gbit NICs in ESX 4.1 and their performance.

 

The setup is the following:

We have two ESX 4.1 hosts (a DL380 G5 and a DL120 G7) that are directly connected to each other using HP NC522SFP 10 Gbit cards.

 

The network cards are connected to each other, and I configured a second vSwitch for this network. When I send a file from one host to the other, it is just as slow as over the normal 1 Gbit network.

 

In vCenter Server I see that everything is set up at 10,000 Mbit.

esxcfg-nics -l also shows a link speed of 10,000 Mbit.

 

I tried the whole setup in a SLES 11 SP1 environment and everything works fine, so there must be something I am forgetting on the VMware side.
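
A couple of things that may be worth verifying from the service console; a sketch, where vSwitch1 is just a placeholder for whichever vSwitch carries the 10 Gbit uplinks:

esxcfg-vswitch -l                  # confirm the 10 Gbit vmnics really are the uplinks of the second vSwitch
esxcfg-vswitch -m 9000 vSwitch1    # optionally enable jumbo frames on that vSwitch
esxcfg-vmknic -l                   # check the MTU on any vmkernel port used for the transfer test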

 

 

Thanks and greetings from Austria!

Daniel

VM’s DNS registration deleted when upgrading VMware Tools


Hello.

 

We're having issues in one environment with some Windows VMs. I'm going to explain the scenario:

 

Previous scenario:

- All the hosts were ESX 4.0.

- VMs had several E1000 NICs.

- VMware Tools were out of date.

 

So, we've done an upgrade of those points in this way:

1) Hosts were upgraded to ESX 4.1 u2 through the Update Manager.

2) Every E1000 NIC was replaced by VMXnet3.

3) The last step was upgrading VMware Tools (with the mandatory reboot).

 

After that we're finding issues with some Windows VMs (Server 2003 and 2008): the VM asks for DNS deletion of its A and PTR records every hour until the VM is rebooted.

Running "ipconfig /registerdns" is a temporary solution because the records are created but also deleted because of the behaviour explained above.

 

The only solution is rebooting the VM; after that, its DNS registration appears to be OK on the DNS server.

 

So we need to know the source of the problem (the VMXnet3 driver, the tools upgrade, etc.). I've only found two guys with the same problem and the same solution.

Any ideas?

 

Thank you very much.

Increasing NetApp LUN size


We have ESXi 4.1 hosts connected to a NetApp filer via FC. On the NetApp we have LUNs on volumes, and on ESX these LUNs are mapped as VMFS datastores. One of the LUNs is running out of space, but the volume it sits on still has space, so I was going to increase the size of the LUN, which you can do on the fly on the NetApp. Would ESX automatically pick up the new LUN size? I don't want to add any extents; I just want the current LUN (and its datastore) to grow.
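
My understanding is that the hosts won't see the new size until the HBAs are rescanned, and that the existing VMFS extent can then be grown into the new space (Properties > Increase on the datastore in the vSphere Client) without adding an extent. A sketch, assuming vmhba1 is the FC adapter:

esxcfg-rescan vmhba1                         # re-read LUN capacities on the FC HBA
vmkfstools -P /vmfs/volumes/<datastore>      # confirm the reported extent size afterwards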

 

Any advice?

Very poor iSCSI performance with HP DL160 G6 and ESX 4.0


I am experiencing very poor iSCSI performance with HP DL160 G6 servers and ESX 4.0 and would welcome help on this matter.

 

I have setup iSCSI on 3 * ESX 4.0 hosts.

Each has a corresponding single target and single LUN on the SAN. The LUNs are formatted as VMFS3.

Each host is connected to the QNAP SAN in the same way via a Gigabit switch.

All NICs operate at 1Gb.

To test performance I am using vmkfstools to clone a disk from the local store to the LUN

 

On the first host (a DL140 G3) performance is good at 48MB/s.

 

On the two DL160s performance is very poor, barely reaching 2 MB/s.

 

When cloning, the DL160 servers also repeatedly log the following in /var/log/vmkwarning:
servername vmkernel: 0:05:23:53.214 cpu2:4308)WARNING: NMP: nmp_DeviceRetryCommand: Device "naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx": awaiting fast path state update for failover with I/O blocked. No prior reservation exists on the device.
servername vmkernel: 0:05:23:54.195 cpu3:4210)WARNING: NMP: nmp_DeviceAttemptFailover: Retry world failover device "naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" - issuing command 0x4100051a20c0

 

I have made sure that the servers are up-to-date:
DL160 G6 Configuration:
ESX 4.0.0 build-702116
2 * HP NC362i Gb nics (firmware 1.7.2, driver 1.3.19.12.1)
1 * Smart Array P212 Controller (firmware 5.14)

 

SAN Server Configuration:
QNAP Ts-459u-rp (firmware 3.6.1 Build 0302T)

 

I have searched everywhere for an answer.
I have also tried the "esxcfg-advcfg -s 1 /VMFS3/FailVolumeOpenIfAPD" as suggested on some posts.
All to no avail.
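
In case it narrows things down, those NMP messages suggest the path to the LUN is flapping rather than a raw throughput problem. A few console checks worth running on the slow hosts; a sketch:

esxcfg-mpath -l     # list the paths to the iSCSI device and their states
esxcfg-swiscsi -q   # confirm the software iSCSI initiator is enabled
esxcfg-vmknic -l    # verify the vmkernel port used for iSCSI and its MTU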

 

Any help / advice would be appreciated!


In-Guest iSCSI vMotion Support


Hi, we're using in-guest iSCSI via the Microsoft iSCSI Initiator to host our Exchange 2010 volumes. We just noticed that snapshots are not supported. Can anyone advise whether vMotion is supported with this configuration, and does anyone know if snapshots will be supported in ESX 5?

ipmi_si_drv module cannot be loaded and server hangs


hi all,

 

I installed VMware ESXi 4.0 on a Sun Fire machine and used it for quite a long time, but suddenly the following error occurred:

"initialization for ipmi_si_drv failed"

Once the server boots, it loads all of the modules except the ipmi_si_drv module.

 

Afterwards the server hangs and a pink diagnostic screen appears. I already have Linux and Windows servers installed on top of this host.

 

How can I solve this issue? Your responses are highly welcome.
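
One workaround that comes up for this symptom is to stop the IPMI driver from loading at boot so the host can come up cleanly; a sketch, to be run from the console or tech support mode, with the module name confirmed first:

esxcfg-module -l | grep -i ipmi    # confirm the exact module name the host uses
esxcfg-module -d ipmi_si_drv       # disable it from loading at boot, then reboot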

 

Thanks

ESX 4.x official or community driver for Neptune-based NIC?


G-d bless corporate procurement.  Give 'em a simple purchase req, and they'll substitute liberally every time.

 

Recently acquired some new quad-port gigabit cards (Sun x8 Express Quad Gigabit Ethernet UTP low-profile, part number 511-1422-01).  ESX 4.0 Update 4 has no driver support for these beasties as far as I can tell, but "lspci" at least recognizes them as follows:

 

vendor: 108e (Oracle/SUN)

device: abcd (Multithreaded 10-Gigabit Ethernet Network Controller)

 

Chipset is Neptune (actually Neptune2), and this NIC is evidently supported in Linux kernels 2.6.24 and later by the "niu" driver.  Verified that CentOS 6.2/6.3 can see the card and talk to it.  Don't be confused by the "10-Gigabit" label: the chipset can do it, but it really is installed in a 1-Gbit harness.  More Linux-specific info at this link: http://catee.net/lkddb/web-lkddb/NIU.html
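
For anyone comparing notes, a quick way to see from the console whether the vmkernel has claimed the ports at all; a sketch:

lspci -n | grep 108e    # the Neptune ports show up with vendor 108e, device abcd
esxcfg-nics -l          # only NICs bound to a vmkernel driver are listed here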

 

Before someone suggests it, exchanging these cards for something on the ESX 4 HCL isn't an option: we're stuck with them :-(.

 

Took a really quick look at what might be required to put together an official-looking driver installation CD for ESX, and that's why I'm asking if someone else has already gone through that pain :-).  If it simplifies the problem and/or potential solution somewhat, these NICs aren't causing any kind of "failure to install" issue: ESX sees the built-in NICs on our servers just fine.

 

Thanks in advance for any help/guidance.

 

--Bob

Can a "dead" path be revived?


Hi folks,

 

On our ESX 4.1.0 server, an iSCSI LUN failed this morning. It still appears as a device, but the corresponding path shows as "dead" rather than "active".

 

vmkiscsi.log reports:

 

2012-12-20-14:25:29: iscsid: Login Failed: iqn.2004-04.com.qnap:ts-439u:iscsi.i360voipam.bd46d4 if=default addr=192.168.100.11:3260 (TPGT:1 ISID:0x3) Reason: 00080000 (Initiator Connection Failure)

 

Is this salvageable?

 

The iSCSI device manager (vendor is QNAP)  shows no errors at all. The same storage system has other targets and LUNs that have not been affected.

 

We tried moving the problematic LUN to a new target, but the host cannot discover the new target; it's as if it doesn't exist.
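
Before writing the path off, it may be worth forcing a rescan of the software iSCSI initiator from the console and watching whether the login is retried; a sketch, where vmhba33 stands in for the software iSCSI adapter:

esxcfg-swiscsi -s                 # rescan the software iSCSI bus
esxcfg-rescan vmhba33             # rescan the software iSCSI adapter
esxcfg-mpath -l | grep -i dead    # see whether the path state changes
                                  # then re-check vmkiscsi.log for new login attempts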

 

Thanks for any help, this is critical for us.

 

George

[VMware vCenter - Alarm Host connection state] Host connection state changed status from Green to Red

Create New Virtual Machine is greyed out


I can't create a new virtual machine in the vSphere Client. Right-clicking the host or going to File > New Virtual Machine is greyed out. It seems like most administrative functions are greyed out, so perhaps this is a permissions issue. Can anyone help? A few things:

 

1. This is free ESX, I don't have vCenter

2. I'm logged in as root user

3. The host is not in maintenance mode

4. I tried running the "service mgmt-vmware restart" command from PuTTY

Rescan doesn't show new LUNs on FC HBA...need to reboot?


On our ESX 4.1.0 boxes, we attached an unused FC HBA from each server to a new SAN and presented LUNS.

 

The HBAs have always shown up in vCenter, and I expected that after doing a "Rescan All" the new LUNs would show up as devices on those HBAs, but they don't.

 

I'm told that "Rescan" never works for us, and the server needs to be rebooted.  Is that true?  Seems like a reboot shouldn't be necessary.
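
It may be worth rescanning the specific adapter from the service console and comparing what the host reports before resorting to a reboot; a sketch, where vmhba2 stands in for the newly cabled FC HBA:

esxcfg-rescan vmhba2    # rescan just that FC adapter
esxcfg-scsidevs -l      # list the devices the vmkernel now sees
esxcfg-mpath -l         # check whether paths to the new LUNs appear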

 

Thanks....


VM Multiple NIC VMOTION problem


Currently running ESXi and have the following setup with three virtual machines:

 

Nic1 = 10.10.10.10 to 12/24 GW 10.10.10.1 - VLAN 50 (Layer 3)

Nic2 = 192.168.99.10 to 12/24 No GW set - VLAN 60 (Layer 2)

 

The three VMs run on a host farm consisting of five hosts, and all hosts are set up with the above VLANs.

When I vMotion a VM to another host, the VM that was moved is not reachable by the other two VMs.

 

Let's say I vMotion a VM whose secondary IP is 192.168.99.10. When I run a tracert to another VM at 192.168.99.11, the route used is the NIC with the 10.10 address, even though the 192 network is directly connected and one hop away. The only way I have found to clear this is to disconnect the NIC from the VM and reconnect it. Once I reconnect and run the tracert again, it shows a direct hop to the 192 address and I am able to ping that host. Disabling the NIC in the guest OS does not resolve the problem. Under normal circumstances everything works as it should and the 192 addresses communicate directly. The problem only occurs when I vMotion.
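
One thing that may be worth checking is whether the port group carrying the 192.168.99.x network has "Notify Switches" set to Yes on every host, since that setting controls whether the physical switches are told about the VM's new location after a vMotion. The port group and uplink configuration can be compared across hosts from the console; a sketch:

esxcfg-vswitch -l    # compare the VLAN 60 port group and its uplinks on each host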

 

Any thoughts or assistance would be appreciated.

 

thanks in advance

Storage vMotion + vmdk files in different folders


Hello,

 

Will Storage vMotion move the VMDK files (currently in two different folders on the same datastore) into the same folder when the VM is migrated to a different datastore?

 

Why the vmdk files are in 2 different folders:

 

Windows 2008 R2 was badly misconfigured, so on reboot it went into System Recovery, and I wasn't able to recover the system.

The system was rebuilt, and the two drives D: and E: were attached to the new VM as existing VMDK disks ("Use an existing virtual disk").

 

So the drive C: VMDK (rebuilt VM) is in new-vm-folder, and the D: and E: VMDK files are in old-vm-folder.

 

So "Will storage vMotion move the vmdk files [in 2 different folders on the same datastore at the moment] into the same folder while migrated into the different datastore?" I know that storage vMotion and re-naming files/folders works well.

If not, what would be the best practice to get this sorted?

 

Many thanks,

Pawel

VMICTimeProvider?


I'm investigating a time shift issue with several 2008 R2 guests on vSphere 4.0.

 

 

 

Time sync between the guest and ESX has been disabled in VMware Tools, but one of the customer's engineers has asked me to double-check, as "w32tm /query /configuration" indicates that VMICTimeProvider.dll is enabled.

 

 

 

The registry indicates the same:

 

 

 

HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider

 

 

 

VMICTimeProvider is not even mentioned under that key on physical machines.

 

 

 

I have read umpteen references to Hyper-V and VMICTimeProvider, but I'm struggling to find any reference involving VMware.

 

 

 

So my question is: what is VMICTimeProvider, and what role, if any, does it have in a VMware guest?

 

 

 

Any help would be appreciated.

LUN detection and Rescan issue


Issues:

1. A datastore disconnected from the ESX hosts and can no longer be discovered.

2. Also, rescanning for LUNs from the ESX hosts takes too long.

**I have a cluster of 3 ESX 4.0 hosts connecting to a NetApp over an FC SAN.**

 

Does anyone have a solution to this, and could the two issues be linked?

VM Replica's Inaccessible after Network Failure


Hi guys

I'm quite new to VMware and am still learning every day, but I have been put into a live environment which I now have to support.

 

The problem I have right now is that we had a power failure last Friday, and since then my host machines aren't seeing the VM replicas which sit on my NAS device.

I am running VMware ESX 4.1.0 with three host machines, plus a server running the vSphere Client, all in my primary server room.

In my DR server room, at the other end of the building, I have an HP StorageWorks NAS device on which my VM replicas reside.

 

After the last power failure, once my UPSs ran out of juice, my host machines stopped seeing the replicas on the NAS device.

I have restarted my switches, restarted the NAS device, and restarted the machine running the vSphere Client, but I still get the "replica (inaccessible)" message next to each of my replicas within the vSphere Client.

 

Has anybody had this issue before? Anyone know how to resolve this without having to restart the actual Host machines themselves?

I am trying to avoid restarting my Host Machines as those VM's need to run 24/7.

However, should that be the only option, then I will schedule downtime and have it done.
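
Depending on how the NAS presents the storage, it may be possible to recover without rebooting the hosts. If the replicas sit on an NFS datastore, the mount state can be checked and the management agents restarted from the console; a sketch (restarting the agents does not normally power off running VMs, but proceed with care):

esxcfg-nas -l                    # check whether the NFS datastore is still mounted and accessible
service mgmt-vmware restart      # restart the ESX management agents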

 

Any assistance is highly appreciated guys. Thank you in advance.
