VMware IO performance better than physical machine, weird?


Recently I did an IO performance test on an HP DL360 Gen9 with a P440ar RAID controller in RAID 10 mode.
Scenario RAW: Install CentOS 7.0 on the physical machine
Scenario VM: Install ESXi 5.5, then install CentOS 7.0 in a VM
Below are the test results:

It seems the VM's IO performance is better than RAW. Is this normal?
[Attachment: test results]

Your test data sizes were different. In your first test, your data size is 128GB, twice your RAM size, so any caching effect is gone.

However, when you do the test in the VM, you test with only a 16GB file. Although you allocated just 8GB of RAM to your VM, the VM host can easily hold your entire 16GB file in RAM, so you are just seeing the caching effect.
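For reference, bonnie++ lets you pin the file and RAM sizes explicitly so the cache cannot hide the disk. A minimal sketch, assuming the disk under test is mounted at /mnt/test (a hypothetical path) and an 8GB guest on a 64GB host:

# bonnie++ -d /mnt/test -s 131072 -r 8192 -u root

Here -d is the directory on the disk under test, -s the total test file size in MB (128GB, double the host's RAM, so neither the guest nor the host cache can hold it), -r the RAM size in MB (bonnie++ wants -s to be at least double -r), and -u the user to run as when started as root.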

My 2 cents.

Stephen Wong.


Your explanation makes sense.
I will do another test with the same memory in the VM as on the physical host.


After I assigned 64GB of memory to the VM and ran bonnie++ again, bonnie++ crashed.
No idea why; it seems ESXi could not stand this kind of heavy IO load.

I switched to hdparm for a simple test. The results are:

In VM:
# hdparm -t /dev/sdb

Timing buffered disk reads:  1680 MB in  3.00 seconds = 559.86 MB/sec
# hdparm -T /dev/sdb

Timing cached reads:   40424 MB in  1.99 seconds = 20265.45 MB/sec


In Raw:
# hdparm -t /dev/sda

Timing buffered disk reads: 1488 MB in  3.00 seconds = 495.73 MB/sec
# hdparm -T /dev/sda

Timing cached reads:   22000 MB in  2.00 seconds = 11014.78 MB/sec

Again it seems the VM's IO was better than raw.
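Worth noting: hdparm -T deliberately measures the Linux buffer cache rather than the disk, so that number is expected to track RAM speed. For the -t figure, a sketch of taking the page cache out of the picture first (assuming your hdparm is recent enough to support --direct):

# sync
# echo 3 > /proc/sys/vm/drop_caches
# hdparm -t --direct /dev/sdb

The drop_caches write flushes the page cache, dentries and inodes, and --direct makes hdparm read with O_DIRECT, bypassing the page cache entirely.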


Again, you're not comparing apples to apples, so the comparison, and hence the conclusion, is not meaningful.

I don't understand why your bonnie++ test with 64GB RAM allocated to the VM would fail. Anyway, if you want, redo the test with:

1) sufficient swap space (say 128GB) for your VM (see the sketch after this list)
2) vmtools installed in your VM
3) your Linux kernel / libraries updated to the latest versions
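For point 1, a minimal sketch of adding a 128GB swap file on CentOS 7 (the /swapfile path is just an example):

# dd if=/dev/zero of=/swapfile bs=1M count=131072
# chmod 600 /swapfile
# mkswap /swapfile
# swapon /swapfile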

Another way to do your test: go into your BIOS setup page and disable some RAM modules (say, leave 8GB or 12GB active), run VMware ESXi, create your VM with 8GB RAM, and test with at least 32GB of data; then you'll have some meaningful numbers. Also mind the RAM cache on your RAID card, if there is any.
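On the P440ar specifically, if HP's hpssacli utility is installed, something along these lines should show the cache configuration and, for a clean run, disable the array accelerator on the logical drive. This is only a sketch: the slot and logical drive numbers are examples, so check the show output first, and the exact syntax may vary by tool version:

# hpssacli ctrl all show config detail
# hpssacli ctrl slot=0 logicaldrive 1 modify arrayaccelerator=disable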

To give you a ballpark value: a 7200rpm hard disk does around 100MB/s (megabytes per second) sequential read on its fastest zone.

Even if you run RAID 10, a stripe of a few such spindles only multiplies that by the number of disks it can stream from, so if you get anything faster than 300MB/s on real hard disks, you're likely just measuring the speed of RAM.

My 2 cents.

Stephen Wong.


A read test alone doesn't mean much.


Reply to #5 stephenwong

I've no idea why bonnie++ crashed; maybe the 32-bit version is not stable under PAE mode.

As for your three suggestions:
1. Swap space should not be touched during the test, so it can be ignored entirely.
2. VMware Tools may provide some kind of caching in its drivers, so I won't use it while testing.
3. The kernel and drivers are already the latest versions.

Once again, I installed a VM with CentOS 7 64-bit as the guest OS, compiled bonnie++ successfully, and it ran to completion without problems.
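For reference, this was just the stock build, assuming the 1.03e source tarball:

$ tar xzf bonnie++-1.03e.tgz
$ cd bonnie++-1.03e
$ ./configure && make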

Below is the result:
[toor@localhost bonnie++-1.03e]$ uname -a
Linux localhost.localdomain 3.10.0-123.el7.x86_64 #1 SMP Mon Jun 30 12:09:22 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
[toor@localhost bonnie++-1.03e]$ ./bonnie++
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
localhost.lo   126G 87492  99 710270  97 361314  42 126653  99 539060  80  1263   5
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ 24303 144 +++++ +++ +++++ +++
localhost.localdomain,126G,87492,99,710270,97,361314,42,126653,99,539060,80,1262.6,5,16,+++++,+++,+++++,+++,+++++,+++,24303,144,+++++,+++,+++++,+++

I agree with Stephen's point that anything faster than 300MB/s on real hard disks is just measuring the speed of RAM; here the 710270 K/sec (about 694MB/s) block write and 539060 K/sec (about 526MB/s) block read are both well past that, so it seems I cannot get the true disk speed in a VM at all.


Are you using a raw device for the disk in the VM? If you use an image file, IO in the VM will always be affected by IO caching on the host.
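If you want to take the host cache out of the equation on ESXi 5.5, one option is a raw device mapping instead of a VMDK image. A sketch using vmkfstools; the naa ID and datastore path are placeholders for your own values:

# vmkfstools -z /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX /vmfs/volumes/datastore1/testvm/rdm.vmdk

The -z option creates a physical-compatibility RDM pointer file, which you then attach to the VM as an existing disk.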



Reply to #8 sunlite

For the HDD inside a VM, SAN or local, a disk with no filesystem on it is a raw disk.

1. A raw disk has no cache (no filesystem cache).
2. A raw disk usually has only a write buffer (a battery-backed write buffer).

Under such a setup, only writes can be accelerated. So when the test result shows the VM's reads beating the raw machine's, the claim that some intermediate layer (hypervisor, filesystem, hardware) is doing the accelerating does not hold.

Also, 300MB/s is actually quite small: a typical consumer SSD conservatively does 500MB/s, an enterprise SAN with everything added up does at least several thousand, and a single PCI-E or memory-channel SSD (Fusion-io or ULLtraDIMM) does one to two thousand, already a tenth of RAM speed.

So looking only at RAW vs VM is meaningless. In particular, if the VM runs DirectPath I/O, it can actually be as fast as physical.