Comments on Virtual Optics: Why VMware over NetApp NFS

Scott (2009-10-01 09:11):
NetApp & VMware have released a whitepaper detailing their performance testing of FC, iSCSI & NFS. You can find it here:

http://media.netapp.com/documents/tr-3697.pdf

Andrew Miller (2009-03-30 14:35):
I'd meant to post on this earlier, as I've found this post very helpful. Besides my own use, I also linked to it, along with my own summary, here:

http://communities.netapp.com/message/8904

Thanks again.

Anonymous (2009-01-04 22:05):
Rolando, what filer do you have? Also, how many disks are in your aggregate? 128 MB/s is the limit of your 1 Gbit network, so your bottleneck is the network. Step up to 10 Gbit on your filer to raise your raw sequential speed. However, most VM data travels in 4K blocks, so you should compare FC to NFS using your real VMs, not a synthetic testing tool.

Craig (2009-01-04 21:18):
To share with you: I run my environment with 2 Gb/s FC on a CX3-80. We use 300 GB per VMFS with the MetaLUN concept on EMC technology. I am able to achieve 190 MB/s easily with my FC. When I try the benchmark on the NFS box I have now, I only manage to get at most 120 MB/s on average. I agree with you that we may not require such high performance, but it also depends on what you are trying to virtualize. I am aiming to virtualize more high-load machines, and to be sincere with you, EqualLogic is catching up in terms of features, pricing, and performance.
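As a back-of-the-envelope check on the throughput figures in the two comments above (a sketch only; it assumes decimal megabytes and the usual convention that Fibre Channel's nominal gigabit rating already nets out 8b/10b line coding to roughly 100 MB/s of payload per Gb):

```python
def ethernet_ceiling_mb_s(gigabits: float) -> float:
    """Raw payload ceiling for Ethernet: bits per second divided by 8."""
    return gigabits * 1e9 / 8 / 1e6

def fc_ceiling_mb_s(nominal_gb: float) -> float:
    """Fibre Channel is conventionally quoted at ~100 MB/s of payload per
    nominal Gb (8b/10b line coding is folded into the marketing number)."""
    return nominal_gb * 100.0

print(f"1 GbE  : ~{ethernet_ceiling_mb_s(1):.0f} MB/s")   # ~125 MB/s
print(f"10 GbE : ~{ethernet_ceiling_mb_s(10):.0f} MB/s")  # ~1250 MB/s
print(f"2 Gb FC: ~{fc_ceiling_mb_s(2):.0f} MB/s")         # ~200 MB/s
```

By that arithmetic, Craig's 190 MB/s over 2 Gb FC and roughly 120 MB/s over GbE NFS are both close to wire speed, which is the point of the reply above: the link, not the protocol, is the bottleneck.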
franklyfrank21 (2008-09-18 07:38):
Just set up VMware with NFS and it's working fine, but I'm unable to see disk I/O performance stats for any VMs I create. I can see disk stats fine if I use local storage. Any ideas?

Anonymous (2008-02-24 21:07):
Here's another one: with NFS, if you have to fail over to your SnapMirrored copies, you don't have to deal with LUN resignaturing as you do with iSCSI/FC!

Unknown (2008-02-23 02:49):
Hi Adrian,

Just curious about your success. Any issues so far?

We have just implemented a solution like yours here: an IBM nSeries n3600 (your FAS2050, Active/Active), but with only 3 ESX hosts. There are 4 x GigE links to each controller, multilink IP hash load balancing is implemented, and jumbo frames are turned on. No issues so far.

Have you done any performance tests yet?
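The "multilink IP hash load balancing" mentioned above refers to the vSwitch NIC-teaming policy "Route based on IP hash". A minimal sketch of the idea, in Python for illustration (the XOR-then-modulo selection below is a simplification; the exact hash ESX uses is an implementation detail): a given source/destination IP pair always lands on the same uplink, so spreading NFS traffic across a 4 x GigE team requires multiple filer IP addresses.

```python
import ipaddress

def pick_uplink(src_ip: str, dst_ip: str, n_uplinks: int) -> int:
    """Simplified IP-hash uplink selection: XOR the two 32-bit addresses
    and take the result modulo the number of teamed NICs. A given
    source/destination pair always maps to the same uplink, so one
    host-to-filer IP pair can never use more than one GigE link."""
    src = int(ipaddress.IPv4Address(src_ip))
    dst = int(ipaddress.IPv4Address(dst_ip))
    return (src ^ dst) % n_uplinks

# One VMkernel IP talking to four filer alias IPs spreads across the team:
for filer_ip in ["10.0.0.11", "10.0.0.12", "10.0.0.13", "10.0.0.14"]:
    print(filer_ip, "-> uplink", pick_uplink("10.0.0.50", filer_ip, 4))
```

This is why NFS datastores in such setups are typically mounted against several filer alias IPs: each mount can hash to a different physical link, but no single mount will ever exceed one link's bandwidth.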
Dave Wujcik (2008-02-06 19:45):
Thanks for the response. Do you have any other info regarding the extent issues? Anything that you can link me to?

Thanks,
-- Dave

She (2008-02-06 18:48):
Dave, from what I hear, you may want to avoid using more than 2 extents per VMFS volume. I haven't tested 3.5 for issues, but ESX 3.0.2 has been known to have issues correlated with multiple extents. Generally this is seen as a temporary solution, and with Storage VMotion, migrating to an extent-free VMFS datastore may be much more appealing.

Dave Wujcik (2008-01-25 13:13):
I'm curious about your SAN benchmarks. I've got an EMC CLARiiON CX-700 and a DMX3 with VMware volumes on each.

In my testing, I could easily pull 90-95 MB/s sustained reads without issue on both, and this was not on isolated systems; these were the current in-use systems.

I also discovered a little "gotcha" that destroys performance numbers on the DMX3 (due to its internal layout): the larger the LUN you create, the slower your read performance is.

I found the best setup for performance on the DMX3 was to share out bare meta LUNs (28.6 GB) and then let ESX tack them all together via extents. This avoids any possibility of internal "plaiding" (stripes going more than one way at the same time, exponentially increasing I/O events).

It turned out to be a "pretty darned fast(TM)" implementation that sometimes outpaces our physical hardware, depending on the task being performed.

I did all of this over 2 Gb FC.

How did you do your testing, and what was your setup?

Thanks,
-- Dave

Anonymous (2007-12-18 01:26):
We're trying this out on a smaller scale in January 2008 with a NetApp 2050 and four ESX hosts. Hopefully it works well.

I notice that ESX 3.5 supports jumbo frames, but not for NAS (NFS) from what I can tell. :-(

Any thoughts on whether having jumbo frames enabled would cause problems or performance issues in this case?