Mixed memory sizes in Dell R710 to achieve 1333 MHz speeds versus 800 MHz

We have some Dell R710s that were originally ordered with 48GB in a 12x4GB 1333 MHz (RDIMM) configuration. We ordered 6 additional 4GB 1333 MHz DIMMs to fill all 18 slots for a total of 72GB, and the memory speed then dropped to 800 MHz.  We immediately got complaints about performance in some of our mission-critical VMs (terminal servers and SQL DBs), so we VMotioned them over to an R710 still on the original 12x4GB config running at 1333 MHz and the complaints went away.  I have been talking to both Dell pre-sales and post-sales technical support; they confirm that populating all 18 slots drops the speed to 800 MHz, but I'm getting conflicting reports about whether 12 DIMMs (2 of the 3 slots in each of the 6 channels) should be running at 1066 MHz right now, which it is not.  Regardless, both say my best option is to return the recently purchased 4GB modules, pull the original 12x4GB modules as well, and replace everything with 12x8GB, which is 96GB and should run at 1333 MHz.  That's more memory than I need, and almost triple what I already budgeted and spent on the extra 4GB sticks.
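
In case it helps when comparing hosts, here is a rough sketch of one way to list each populated DIMM slot with its size and the speed it reports, by parsing dmidecode -t memory (SMBIOS type 17). It assumes dmidecode and Python 2.7 or newer are available (for example on the ESX service console or a Linux host booted on the box) and that it runs as root; field names vary by SMBIOS/dmidecode version, so it falls back from "Configured Clock Speed" to the plain "Speed" field, which on older firmware may be the rated rather than the running speed.

#!/usr/bin/env python
# Rough sketch: list populated DIMM slots with size and reported speed.
# Assumes dmidecode is installed, the script runs as root, and Python 2.7+.
# Field names vary by SMBIOS/dmidecode version, so this is best-effort only.
import subprocess

out = subprocess.check_output(["dmidecode", "-t", "memory"]).decode("utf-8", "replace")

for block in out.split("\n\n"):
    if "Memory Device" not in block:
        continue
    fields = {}
    for line in block.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            fields[key.strip()] = value.strip()
    size = fields.get("Size", "Unknown")
    if size.startswith("No Module"):
        continue  # empty slot
    # Prefer the configured speed if this dmidecode exposes it; otherwise
    # fall back to "Speed" (which may be the rated rather than running speed).
    speed = fields.get("Configured Clock Speed") or fields.get("Speed", "Unknown")
    print("%-10s  %-10s  %s" % (fields.get("Locator", "?"), size, speed))

Running something like this before and after adding the extra six DIMMs should make the per-slot drop from 1333 MHz to 800 MHz visible.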

 

The other option they say I could take, and the point of my question, is to mix 6x4GB and 6x8GB to reach 72GB.  I also see this is an offered memory configuration on the Dell website when building an R710, and it lists 1333 MHz as well.  Pre- and post-sales tech support had these things to say, and it makes me a bit nervous because it's not an absolute "you'll be fine":

 

pre-support guy:

"Everything I have read or been told confirms that it is perfectly fine to mix module sizes as long as the modules in each channel are the same size. I think it may be recommended by most to keep the sizes consistent but as I mentioned before we have configurations in our system for new servers to be quoted with mixed sizes so that makes me even more confident that if you went this route you would be fine and experience little to no drop at all in performance."

 

post-support guy:

"As far as your new question about using 6 DIMMs of 4 GB mixed with 8 GB for a total of 72 GB, that is not considered an optimal configuration but I don’t believe you should really run into many issues with it.  I’d mentioned before that if you mix speeds, it will downclock the faster DIMMs to the speed of the slower DIMMs, so definitely would advise against mixing speeds. As far as size, the one thing you might come across is as it’s booting, it might say it’s not an optimal configuration but it should detect all the memory and be able to make use of all of it.  Don’t believe there would really be a performance hit unless you’re missing speeds.  Also, remember to mirror your configuration all across all channels, so you’d want to put all 4 GB and 8 GB in same configuration such as putting all 8 GB in slots 1/2/3 and then putting the 4 GB DIMMs in 4/5/6 on both A and B slots."

 

So the real question is: if it's not an "optimal" configuration, what does that mean for the actual performance of my vSphere servers?  Is anyone running a mixed-size configuration like this, and have they had any problems?

 

Thanks,

Ben

