An I/O Performance Comparison Between Loopback-Backed and blktap-Backed Xen File-Backed VBDs

I have benchmarked the I/O performance of Xen DomU. For easier management, some of our DomU VMs use file-backed VBDs. Previously, our VMs used loopback-mounted file-backed VBDs, but blktap-based backends are recommended by the Xen community. Before switching from loopback-based VBDs to blktap-based VBDs, I ran this performance comparison.
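
The switch itself is mostly a one-line change in the DomU configuration file. A minimal sketch of the two disk lines; the image path and device name are assumptions for illustration:

# Loopback-backed file VBD
disk = ['file:/var/lib/xen/images/vm101.img,xvda,w']

# blktap-backed file VBD ("tap:aio" selects the blktap driver with asynchronous I/O)
disk = ['tap:aio:/var/lib/xen/images/vm101.img,xvda,w']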

Note: if your VM is I/O intensive, consider setting up an [[setting-up-lvm-backed-xen-domu|LVM-backed DomU]]; see the [[xen-domus-io-performance-of-lvm-and-loopback-backed-vbds|performance comparison]].

The hardware platform:

DomU:

CPU: 2 x Intel(R) Xeon(R) CPU E5520 @ 2.27GHz

Memory: 1G

HD:

Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
              ext4     16G  2.0G   13G  14% /
/dev/xvda1    ext3    194M   23M  162M  13% /boot
tmpfs        tmpfs    517M     0  517M   0% /dev/shm

Dom0:

The raw image file is stored on an ext4 partition.
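
For completeness, such a raw image is typically created as a sparse file in Dom0 before being attached to the guest. A minimal sketch, assuming a 16 GB image at a hypothetical path:

# create a 16 GB sparse raw image (allocates no data blocks up front)
dd if=/dev/zero of=/var/lib/xen/images/vm101.img bs=1M count=0 seek=16384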

Test method

Bonnie++ 1.03c

Using default parameters.
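
With default parameters, Bonnie++ chooses a test file size of twice the machine's RAM (hence the 2064 MB size in the results below) and 16×1024 files for the create/delete tests. A typical invocation; the target directory is an assumption, and -u is only needed when running as root:

# run Bonnie++ with defaults against a directory on the VBD under test
bonnie++ -d /mnt/test -u root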

Results

Loopback driver backed:

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
vm101         2064M 25511  35 18075   3 199488  47 71094  98 937880  86 +++++ +++
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
vm101,2064M,25511,35,18075,3,199488,47,71094,98,937880,86,+++++,+++,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++

blktap driver backed:

Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
vm101         2064M 69438  96 93549  20 38118  10 54955  76 131645   8 249.1   0
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 29488  79 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
vm101,2064M,69438,96,93549,20,38118,10,54955,76,131645,8,249.1,0,16,29488,79,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++

From the results we can see that the loopback-backed VBD has better read performance, but at high CPU usage, and clearly worse write performance. The blktap-backed VBD has a more balanced profile: its write speed is much higher than the loopback-backed one's, and in exchange for somewhat lower read performance we get much better overall performance. So from the performance point of view, the blktap driver is the better choice for Xen DomU VBDs.

There are other benefits to using the blktap driver. Loopback file-backed VBDs may not be appropriate for backing I/O-intensive domains, because this approach is known to suffer substantial slowdowns under heavy I/O workloads, due to the I/O handling of the loopback block device used to support file-backed VBDs in Dom0 [1]. Another reason is scalability: by default, Linux supports at most eight loopback file-backed VBDs across all domains. To get more than eight loop devices, the max_loop=n option must be passed either on the kernel boot command line or to the loop module, depending on whether CONFIG_BLK_DEV_LOOP is compiled into the Dom0 kernel or built as a module; the method is described in [[add-more-loop-device-on-linux]], and a short example follows below.

The blktap driver also brings other advantages, such as easy support for metadata disk formats (copy-on-write, encrypted disks, sparse formats and other compression features) and avoidance of the dirty-page flushing problems present in the Linux loopback driver, among others [2].
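
For example, assuming the loop driver is built as a module and 64 loop devices are wanted (64 is an arbitrary value):

# load the loop module with a higher device count
modprobe loop max_loop=64

# or make the setting persistent, e.g. in /etc/modprobe.conf or under /etc/modprobe.d/
options loop max_loop=64

If the loop driver is compiled into the Dom0 kernel, append max_loop=64 to the kernel line in the bootloader configuration instead.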

References

[1] http://www.cl.cam.ac.uk/research/srg/netos/xen/readmes/user/
[2] http://wiki.xensource.com/xenwiki/blktap

