Saturday, January 29, 2011

Xen or KVM? Please help me decide and implement the one which is better

I have been researching virtualization for a server that will run three guests: two Linux-based and one Windows. After trying my hand at XenServer, I was impressed with the architecture and wanted to use the open-source Xen, which is when I started hearing a lot more about KVM: how good it is, how it is the future, and so on. So, could anyone here please help me answer some of my questions about KVM versus Xen?

  1. Based on my requirement of three VMs on one server, which is better for performance, KVM or Xen, considering that one of the Linux VMs will work as a file server, one as a mail server, and the third will be a Windows server?

  2. Is KVM stable? What about upgrades? And what about Xen? I cannot seem to find support for it in Ubuntu.

  3. Are there any published benchmarks for Xen and KVM? I cannot seem to find any.

  4. If I go with Xen, will it be possible to move to KVM later, or vice versa?

In summary, I am looking for real answers on which one I should use: Xen or KVM?

  • Red Hat is moving from Xen to KVM. That's certainly swaying my choice for running it under an existing Linux install. On the other hand, there isn't anything like XenServer for KVM.

    Converting between the two is possible but not easy.

    dyasny : 1. It has already moved, and has released two versions of RHEV. 2. RHEV is a full-scale management and virtualization suite for KVM.
    daff : Red Hat owns KVM. They bought the company that developed it (Qumranet) in 2008 and have put their effort into KVM since at least then. Red Hat also sponsors the development of libvirt (the management library for virtualization infrastructure) and related projects like Virtual Machine Manager (virt-manager). Combine that with the fact that Ubuntu (another extremely popular distro) bases its virtualization on KVM, and the question "KVM or Xen?" becomes moot. At least if you run any kind of production virtualization environment and cannot afford to roll your own kernels or apply unofficial patches.
    From Bill Weiss
  • I find Xen's handling of mapping block devices to domU VMs far easier to manage, and far more flexible, than KVM's. Specifically, I create and manage LVs (with LVM2) in the dom0 and map them directly to, say, /dev/sda1 in the domU.

    With KVM (as far as I know), I have to export whole partitioned disks, which means I have to use partx on the dom0 to 'attach' and 'detach' them.

    I also like that, for lower performance requirements, Xen works on older hardware that lacks the VT bit. As far as I know, KVM requires hardware virtualization support in the processor.

    Unfortunately, I have seen the writing on the wall: Red Hat and Ubuntu seem to favor KVM at this point. With Xen out of the mainline kernel tree, and with Citrix shipping their own XenServer product, there doesn't seem to be much momentum behind getting it back into the tree.

    JohnAdams : Oh. After your response, I searched for some solutions and found this: http://www.linux-kvm.com/content/xen-kvm-disk-management-issues. I am not sure whether that works. What about using DRBD with KVM?
    Jason : I don't think there's anything preventing you from using DRBD, either in the VM or on the bare-metal host. Xen is currently my forte, but this is how migration works without a fancy SAN: you back the block device with DRBD, and when you have to take down the primary node, the secondary node can pick right up without any service interruption. I'm sure it's on KVM's drawing board, but I'm not sure if it does migration yet.
    daff : I don't understand what you are talking about. Management of block devices is extremely easy with KVM, especially if you are using libvirt for management. I simply create an LV (LVM) on the virtualization host (there is no "dom0" in KVM-land) and pass it on to the guest that needs it (add three lines to the guest's XML definition file). Using virt-manager I can even use a pointy-clicky interface to do that. Restart the guest and be done with it. The guest sees the new block device as /dev/vdb, /dev/vdc, etc. Couldn't be easier. Or did I misunderstand your post?
    Jason : In Xen, I can map an LV to just about any device name I want in the domU. If I wanted to make /dev/chicken, I could define it in my dom.cfg file and map it to a block device on the dom0 side. (I don't make /dev/chicken; I usually make /dev/sd[abc][123].) I tend to run lvcreate -n foo ...; mount /dev/vg/foo /mnt/t; debootstrap lenny /mnt/t; and *poof*, I have a more or less complete host to boot up in the VM. With KVM (at least as I understand it), I'd have to create a partition table and MBR on /dev/vg/foo, right? (Or have they made that easier to work with?)
    From Jason
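For concreteness, here is a rough sketch of the two mapping styles discussed in the answer and comments above (the volume group, LV, and guest names are made up for illustration; treat this as an approximation, not a verified configuration). In Xen, a dom0 block device is mapped to a domU device name via the disk line in the domain config file:

```
# /etc/xen/guest.cfg (Xen domU config, illustrative names)
# 'phy:' maps a physical/LVM block device on the dom0 side
# directly to /dev/sda1 inside the domU, writable ('w')
disk = ['phy:/dev/vg0/guest-root,sda1,w']
```

With libvirt-managed KVM, the rough equivalent is a <disk> element in the guest's XML definition; the guest then sees the LV as a whole virtio disk (e.g. /dev/vdb) rather than a named partition:

```
<!-- fragment of a guest's libvirt XML definition, illustrative names -->
<disk type='block' device='disk'>
  <source dev='/dev/vg0/guest-data'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```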
  • Xen is better for performance (modifying the guest kernel for paravirtualization avoids all the instruction traps that must be handled to make hardware virtualization work), but it requires a kernel that can be modified. If you need to run Windows as a guest, then you'll have to go with KVM.

    Distro support for Xen is dwindling because the patches cannot keep up with the pace of Linux kernel development, whereas the kernel parts of KVM are already fully integrated into the Linux kernel, and the userspace parts can evolve at their own pace.

    Dan Andreatta : Xen domU code is in the kernel as well. The dom0 code was supposed to be in the kernel by now, but Xen is not Linux...
    daff : KVM has paravirtualized network and block I/O drivers (virtio) which perform extremely well. I did some non-scientific tests (iperf, dd) on Xen and KVM systems running on identical hardware (HP DL380 G6), and if anything KVM performed a little better network I/O-wise. The most compelling argument is that, as things currently stand, there is simply no way we will ever see Xen integrated into mainline Linux. SUSE is the last big distro to build its virtualization infrastructure on Xen, and they struggle to forward-port the Xen patches to "current" (2.6.27) Linux versions.
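The comment above describes quick, non-scientific checks rather than a formal benchmark. A minimal sketch of the same kind of test follows; the file path and the guest IP address are placeholders, and iperf must be installed separately for the network part:

```shell
#!/bin/sh
# Check whether the CPU advertises hardware virtualization support
# (vmx = Intel VT-x, svm = AMD-V). KVM needs one of these flags;
# Xen paravirtualization does not.
grep -cE 'vmx|svm' /proc/cpuinfo || echo "no hardware virtualization flags"

# Rough disk-write throughput inside a guest. conv=fdatasync forces
# the data to disk, so the reported rate is not just page-cache speed.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync
rm -f /tmp/ddtest

# Rough network throughput between host and guest (run "iperf -s" on
# the other endpoint first; 192.168.122.10 is a placeholder address):
# iperf -c 192.168.122.10 -t 30
```

Numbers from runs like these vary with caching, load, and hardware, so they are only useful for comparing two machines set up identically, as done above.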
  • Xen is a technological dead end, a point which has been discussed many times in all sorts of forums. That is why all the major players are leaving it behind.

    If you want a supported and manageable setup with KVM under the hood, look at RHEV. There are also alternatives: libvirt, Proxmox, etc.

    From dyasny
