Xen DomU’s I/O Performance of LVM and loopback Backed VBDs
This post lists benchmark results (using bonnie++) of the I/O performance of Xen LVM backed and loopback backed VBDs.
The configuration of machines
Dom0
VCPU: 2 (Intel(R) Xeon(R) CPU E5520 @ 2.27GHz)
Memory: 2GB
Xen and Linux kernel: Xen 3.4.3 with Xenified 2.6.32.13 kernel
DomU
VCPU: 2
Memory: 2GB
Linux kernel: Fedora (2.6.32.19-163.fc12.x86_64)
DomU’s profile:
name="10.0.1.200"
vcpus=2
memory=2048
disk = ['phy:vg_xen/vm-10.0.1.150/vmdisk0,xvda,w']
#disk = ['tap:aio:/lhome/xen/vm0-f12/vmdisk0,xvda,w']
#disk = ['file:/lhome/xen/vm0-f12/vmdisk0,xvda,w']
vif=['bridge=eth0']
bootloader="/usr/bin/pygrub"
#extra="single"
on_reboot='restart'
on_crash='restart'
The "disk" line is changed depending on which backend driver is used: "phy:" selects the LVM backed VBD, while "tap:aio:" and "file:" select the file backed (loopback) variants. A sketch of creating the backing stores follows below.
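For reference, here is a minimal sketch of how the two kinds of backing store might be created. The volume group name vg_xen and the image path come from the config above; the 10 GB size and the logical volume name are illustrative assumptions, not the exact layout we used:

# LVM backed VBD: create a logical volume in volume group vg_xen (size is an assumption)
lvcreate -L 10G -n vmdisk0 vg_xen
# File backed (loopback) VBD: create a sparse 10 GB image file for the file: or tap:aio: backends
dd if=/dev/zero of=/lhome/xen/vm0-f12/vmdisk0 bs=1M count=0 seek=10240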
Benchmark method
We use bonnie++ to test I/O performance:
# bonnie++ -u root
We run bonnie++ in a single VM. We also test how performance changes after taking an LVM snapshot for a new VM (see the sketch below). For the file backed VMs, we additionally run two VMs on the same hard disk at the same time and run bonnie++ in both.
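For the snapshot test, the new VM's volume can be created as an LVM copy-on-write snapshot of the original, roughly like this (the snapshot name, the 5 GB copy-on-write size, and the origin path are assumptions for illustration):

# Create a copy-on-write snapshot of the original logical volume
lvcreate -s -L 5G -n vmdisk0-snap /dev/vg_xen/vmdisk0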
Bonnie++’s result is in this format:
Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
localhost.locald 4G 76999  98 107423  21 47522  13 73347  91 159847  16 266.0   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
localhost.localdomain,4G,76999,98,107423,21,47522,13,73347,91,159847,16,266.0,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
In the result section below, we only list the last line (the CSV summary), which contains all the results.
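If you want to pull numbers out of those CSV lines, the fields follow the table layout above. For example, assuming the lines are collected into a hypothetical results.csv file, the sequential block write and block read throughput can be extracted with awk (field positions match bonnie++ 1.03's CSV output):

# Print sequential block write (field 5) and block read (field 11), both in K/sec
awk -F, '{ printf "%s: write %s K/sec, read %s K/sec\n", $1, $5, $11 }' results.csv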
Benchmark result
LVM backed VBD
localhost.localdomain,4G,76999,98,107423,21,47522,13,73347,91,159847,16,266.0,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
localhost.localdomain,4G,79588,98,120078,22,46140,13,75343,94,150167,15,248.7,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
localhost.localdomain,4G,81942,98,113617,22,47736,13,75947,94,152110,15,262.1,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
New LVM logical volume created by snapshot
localhost.localdomain,4G,11846,15,12044,2,27133,7,71510,92,141408,14,262.7,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
localhost.localdomain,4G,12200,15,18147,3,33086,9,66687,89,146550,14,251.9,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
localhost.localdomain,4G,58521,73,58482,10,33880,9,69399,90,144237,14,267.4,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
localhost.localdomain,4G,62553,78,57576,11,32755,9,70037,89,143462,14,259.9,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
localhost.localdomain,4G,66031,84,65640,12,34357,9,66036,86,152171,15,266.4,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
localhost.localdomain,4G,58666,75,60092,11,34826,9,72821,91,141328,14,259.7,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
File backed VBD
vm112,4G,20865,27,23559,4,32913,9,63006,81,128395,13,217.9,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
vm112,4G,23022,30,18611,3,30086,8,63784,82,125736,13,197.7,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
vm112,4G,21485,27,20366,3,29587,8,72130,92,140957,14,239.6,0,16,+++++,+++,+++++,+++,30751,52,+++++,+++,+++++,+++,+++++,+++
vm112,4G,22375,32,21716,3,30300,8,65488,87,128625,13,221.4,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
vm112,4G,21968,28,19298,3,29007,8,68469,88,122111,12,222.5,0,16,26967,94,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
vm112,4G,21477,28,20463,3,38395,10,49312,63,154206,15,241.0,0,16,+++++,+++,+++++,+++,32699,56,+++++,+++,+++++,+++,+++++,+++
Two VMs on the same disk running together
A:
vm112,4G,10645,13,9498,1,9606,2,30866,41,86911,8,100.1,0,16,9181,22,20583,6,+++++,+++,+++++,+++,+++++,+++,+++++,+++
vm112,4G,10623,13,10143,1,10485,2,26013,35,77362,7,116.3,0,16,25701,66,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
vm112,4G,10824,14,9558,1,12028,3,27503,36,57196,5,92.1,0,16,9679,28,+++++,+++,9294,15,+++++,+++,+++++,+++,+++++,+++
vm112,4G,15098,19,10485,1,12536,3,22771,30,64679,6,142.2,0,16,32006,82,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
vm112,4G,11315,14,9674,1,12052,3,26453,35,68206,7,121.1,0,16,23789,62,32446,13,+++++,+++,+++++,+++,+++++,+++,+++++,+++
vm112,4G,11865,15,11564,2,11508,3,24945,34,61946,6,102.6,0,16,13297,34,21805,7,+++++,+++,+++++,+++,+++++,+++,+++++,+++
B:
vm119,4G,8963,11,9255,1,10909,3,36446,48,70485,7,125.1,0,16,19701,52,23649,8,9574,16,+++++,+++,+++++,+++,+++++,+++
vm119,4G,9074,12,8410,1,12898,3,35266,47,68469,7,107.4,0,16,6585,17,3206,1,+++++,+++,+++++,+++,+++++,+++,+++++,+++
vm119,4G,9151,13,8664,1,10285,2,20120,28,58011,5,90.9,0,16,22894,59,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
vm119,4G,9053,11,10406,1,12852,3,27618,37,55405,5,108.8,0,16,10144,24,22599,7,+++++,+++,+++++,+++,+++++,+++,+++++,+++
vm119,4G,8987,11,11464,1,12123,3,19278,26,59274,6,104.8,0,16,5357,13,23010,7,+++++,+++,7922,18,+++++,+++,+++++,+++
vm119,4G,9593,12,11450,1,30598,8,57078,73,119884,12,222.2,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
Conclusion
From the evaluation results, we can see that LVM backed VBDs for Xen DomU have much better I/O performance than file backed VBDs. From our experience running Xen with LVM in our cluster for more than two years, LVM backed VBDs have also been quite stable.
Hello,
What is the recommended base configuration for a Xen host and VMs on CentOS 7.2?
In total, the server has 3 disks: 2 x 2 TB HDDs and a 240 GB SSD.
Our setup is: host (dom0) on a 2 TB HDD and VMs (domU) on the SSD.
On the host (2 TB HDD), /boot is a separate partition and the rest of the disk is on LVM.
On the SSD, the virtual machines (domU) are also on LVM.
So my concern is: since both dom0 and domU are on LVM, does this create an I/O performance issue?
Thanks,
Nishit Shah
Your configuration looks pretty good. LVM is lightweight and the limitation is usually the underlying I/O device/channel.
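If you want to verify this on your own hardware, one quick, read-only check is to compare raw device throughput against a logical volume on the same device (the device and LV names here are assumptions; direct I/O bypasses the page cache so the numbers reflect the device itself):

# Read 1 GB from the raw SSD and from an LV on it, bypassing the page cache
dd if=/dev/sdb of=/dev/null bs=1M count=1024 iflag=direct
dd if=/dev/vg_ssd/lv_vm1 of=/dev/null bs=1M count=1024 iflag=direct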
Thank You.
Another question:
I have created two guest VMs with CentOS 7.2 on the SSD drive; the SSD drive is on LVM. Inside the first VM, the partition is an LVM based XFS filesystem, and inside the second VM, the partition is a standard XFS filesystem.
The boot time of the first VM is around 4 to 5 minutes, while the second VM boots in less than 30 seconds.
The boot time of the first VM really worries me for a production server. Is it because of the partition created on LVM inside the first VM? On the first VM's console, I have noticed that xenbus_probe_frontend
hangs the OS boot for up to 5 minutes as it checks some devices on the system.
Any idea how to fix this or some workaround?
Thanks,
Nishit Shah