Checking virtualization support (KVM, LXC) in a Debian / Ubuntu environment

Virtualization technologies such as KVM (Kernel-based Virtual Machine), Xen, and LXC (Linux Containers) often need support from the Linux kernel or from the CPU hardware. For a technology like KVM, which relies heavily on hardware-assisted virtualization such as AMD-V or Intel VT-x, it is best to confirm that your system supports it before you start; otherwise you may only discover the painfully slow performance once you are already using it. Beyond the hardware itself, hardware-assisted virtualization also has to be enabled in the motherboard BIOS and supported by the software. These notes cover how to quickly confirm, on Ubuntu, whether your environment supports the features virtualization will need; on the software side the focus is on KVM / LXC.

1. Check whether the CPU supports hardware virtualization, using the flags in /proc/cpuinfo:

$ grep flags /proc/cpuinfo | uniq | grep -E 'vmx|svm'
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl eagerfpu pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm vnmi ept fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat
  • On AMD, a processor that supports AMD-V should show an svm flag
  • On Intel, the flag corresponding to VT-x is vmx
  • For other architectures such as ARM or PowerPC, see the information here:
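
If you just want a quick yes/no version of the check above, counting the lines that contain either flag also works; a count of 0 means no hardware virtualization flag was found:

$ grep -Ec '(vmx|svm)' /proc/cpuinfo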

2. Check that the kvm kernel modules are loaded, using an Intel processor as the example:

$ lsmod | grep kvm
kvm_intel 172032 0
kvm 540672 1 kvm_intel
irqbypass 16384 1 kvm

(On an AMD processor you should see kvm_amd instead of kvm_intel.)

We can also use the kvm-ok tool to check how well the environment supports KVM. To use kvm-ok, first install the cpu-checker package with apt (Ubuntu packages it, but Debian does not).
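
For example, on Ubuntu:

$ sudo apt-get install cpu-checker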

Below are the outputs with and without support, respectively:

$ sudo kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
$ sudo kvm-ok
INFO: Your CPU does not support KVM extensions
KVM acceleration can NOT be used

If the processor does support the corresponding virtualization technology but it is reported as unsupported, or the kvm module did not load properly, try loading the relevant modules by hand (for the modules with intel or amd in the name, pick the one matching your processor brand):

$ sudo modprobe kvm
$ sudo modprobe kvm_intel
$ sudo modprobe kvm_amd
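
If the module loads fine by hand but is gone again after a reboot, one way to make it persistent is sketched below (assuming a systemd-based release that reads /etc/modules-load.d; on older releases, append the module name to /etc/modules instead):

$ echo kvm_intel | sudo tee /etc/modules-load.d/kvm.conf    # use kvm_amd on AMD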

3. Check the support and details of QEMU / LXC virtualization at the same time

The tool used here is virt-host-validate; these days both Ubuntu and Debian ship it as a ready-to-install package, and it shows many more details about virtualization support. To use virt-host-validate, install the libvirt-bin package with apt. virt-host-validate basically needs no arguments, unless you want to check only one specific virtualization technology. Example runs and explanations follow:
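
For example, installing the package and then limiting the check to the QEMU-related items (the optional positional argument selects a single virtualization technology):

$ sudo apt-get install libvirt-bin
$ sudo virt-host-validate qemu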

Result without hardware virtualization support (libvirt-bin v1.2.x on Ubuntu 14.04.5 / Debian 8.6):

QEMU: Checking for hardware virtualization : WARN (Only emulated CPUs are available, performance will be significantly limited)
QEMU: Checking for device /dev/vhost-net : PASS
QEMU: Checking for device /dev/net/tun : PASS
LXC: Checking for Linux >= 2.6.26 : PASS

Result with hardware virtualization support (libvirt-bin v1.2.x on Ubuntu 14.04.5 / Debian 8.6):

$ sudo virt-host-validate
QEMU: Checking for hardware virtualization : PASS
QEMU: Checking for device /dev/kvm : PASS
QEMU: Checking for device /dev/vhost-net : PASS
QEMU: Checking for device /dev/net/tun : PASS
LXC: Checking for Linux >= 2.6.26 : PASS

You may also see messages prompting you to load an additional kernel module, such as vhost_net. In my case I only saw this message because I had forgotten to run with root privileges; some of the checks need elevated privileges to give a definite answer (libvirt-bin v1.2.x on Ubuntu 14.04.5 / Debian 8.6):

$ sudo virt-host-validate
QEMU: Checking for hardware virtualization : PASS
QEMU: Checking for device /dev/kvm : PASS
QEMU: Checking for device /dev/vhost-net : WARN (Load the 'vhost_net' module to improve performance of virtio networking)
QEMU: Checking for device /dev/net/tun : PASS
LXC: Checking for Linux >= 2.6.26 : PASS
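
If the vhost_net warning shows up even when running as root, loading the module manually should clear it:

$ sudo modprobe vhost_net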

By Ubuntu 16.04, with libvirt-bin v1.3.1, the results have become much more detailed, roughly as follows (I won't paste the output for every version separately):

QEMU: Checking for hardware virtualization : WARN (Only emulated CPUs are available, performance will be significantly limited)
QEMU: Checking if device /dev/vhost-net exists : PASS
QEMU: Checking if device /dev/net/tun exists : PASS
QEMU: Checking for cgroup 'memory' controller support : PASS
QEMU: Checking for cgroup 'memory' controller mount-point : PASS
QEMU: Checking for cgroup 'cpu' controller support : PASS
QEMU: Checking for cgroup 'cpu' controller mount-point : PASS
QEMU: Checking for cgroup 'cpuacct' controller support : PASS
QEMU: Checking for cgroup 'cpuacct' controller mount-point : PASS
QEMU: Checking for cgroup 'devices' controller support : PASS
QEMU: Checking for cgroup 'devices' controller mount-point : PASS
QEMU: Checking for cgroup 'net_cls' controller support : PASS
QEMU: Checking for cgroup 'net_cls' controller mount-point : PASS
QEMU: Checking for cgroup 'blkio' controller support : PASS
QEMU: Checking for cgroup 'blkio' controller mount-point : PASS
QEMU: Checking for device assignment IOMMU support : WARN (Unknown if this platform has IOMMU support)
LXC: Checking for Linux >= 2.6.26 : PASS
LXC: Checking for namespace ipc : PASS
LXC: Checking for namespace mnt : PASS
LXC: Checking for namespace pid : PASS
LXC: Checking for namespace uts : PASS
LXC: Checking for namespace net : PASS
LXC: Checking for namespace user : PASS
LXC: Checking for cgroup 'memory' controller support : PASS
LXC: Checking for cgroup 'memory' controller mount-point : PASS
LXC: Checking for cgroup 'cpu' controller support : PASS
LXC: Checking for cgroup 'cpu' controller mount-point : PASS
LXC: Checking for cgroup 'cpuacct' controller support : PASS
LXC: Checking for cgroup 'cpuacct' controller mount-point : PASS
LXC: Checking for cgroup 'devices' controller support : PASS
LXC: Checking for cgroup 'devices' controller mount-point : PASS
LXC: Checking for cgroup 'net_cls' controller support : PASS
LXC: Checking for cgroup 'net_cls' controller mount-point : PASS
LXC: Checking for cgroup 'freezer' controller support : PASS
LXC: Checking for cgroup 'freezer' controller mount-point : PASS

Traces of this information can also be found with tools such as sysctl and dmesg; in a unix environment there is rarely only one way to do something. But dmesg messages sometimes get flooded away when there is too much output, and with sysctl you have to memorize more key paths, so I still find kvm-ok and cpuinfo simpler.

With the information above, we can basically tell whether the environment at hand is suitable for virtualization workloads. Even inside an environment that is already virtualized, such as a VPS on AWS or DigitalOcean, running one more layer of virtual machines can still perform well as long as the surrounding software and hardware provide proper nested virtualization support. Conversely, without proper hardware and software virtualization support, even a very capable server processor can sit at full CPU utilization running just a single layer of virtual machines, with overall performance too slow to accept. That is why it pays to confirm support for the virtualization technologies we need up front.
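
As an aside on nested virtualization: on a KVM host, whether nesting is enabled can be read from the module parameter (a quick check; Y or 1 means enabled):

$ cat /sys/module/kvm_intel/parameters/nested    # kvm_amd on AMD processors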

Manage VirtualBox virtual machines from the command line

VirtualBox is a very useful x86/AMD64 virtualization application. We usually use it to test different operating systems, to do computer-science-related exercises, or simply to slice up hardware resources for better utilization.

I like to run VirtualBox on my powerful servers and use a lightweight, not very powerful computer such as a Chromebook to connect remotely to the virtual machines under VirtualBox. That way I don't have to carry a heavy computer everywhere, yet I can still have multiple machines with several different systems running different programs.

In the beginning I used a VNC server with X over an SSH tunnel to create a secure connection and then launched VirtualBox. In fact the window manager is not always needed, especially once the operating system is installed and running properly, so I wondered whether it is possible to control the virtual machines from the command line. The answer is yes, and the command line is much more powerful than I thought: I believe every task and configuration can now be done from the command line, including creating or cloning a VM, modifying a VM's hardware resources, importing/exporting VMs, shared folders, and attaching/detaching network interfaces or USB devices.

Controlling the VMs from the command line instead of the GUI means I don't need to start a VNC client and connect to my VNC server, nor forward X to my client machine, which is very helpful: the machines can run "in the background" (in fact, inside the X session provided by vncserver). Note that VirtualBox still needs an X environment with the VirtualBox GUI launched (at least as of VirtualBox v4.3.34); in my tests, if no GUI instance is running, the startvm command I talk about later does not work: it tells you the VM started successfully, but in fact it did not, and it returns exit status 1.

I want to share some basic, commonly used commands for controlling virtual machines created with VirtualBox. If you did not know them before, I hope this helps.

The command ‘virtualbox’ usually refers to the GUI version of VirtualBox; here, for the command line, we use ‘vboxmanage’. Remember, I use the “vm name” to identify a virtual machine below, but you can also use its UUID in the same places. Okay, here we go:

List all the virtual machines
– vboxmanage list vms

List the running virtual machines
– vboxmanage list runningvms

List the dhcp server info
– vboxmanage list dhcpservers

Show info about a virtual machine
– vboxmanage showvminfo “vm name”

Power on a vm:
– vboxmanage startvm “vm name”
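
Power on a vm without opening any window (headless mode; depending on your version, this may also avoid the X requirement mentioned earlier):
– vboxmanage startvm “vm name” --type headless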

Force reset/reboot a vm:
– vboxmanage controlvm “vm name” reset

Force power off a vm(cut the power down):
– vboxmanage controlvm “vm name” poweroff

Power off a vm as “Press its power button”(acpi, send power off signal, to power off in normal process, not cut the power down):
– vboxmanage controlvm “vm name” acpipowerbutton

Make a vm sleep as “Press its power button”(acpi, send sleep signal):
– vboxmanage controlvm “vm name” acpisleepbutton

Pause a vm:
– vboxmanage controlvm “vm name” pause

Resume a paused vm:
– vboxmanage controlvm “vm name” resume

Save a vm’s state(like poweroff but all states will be saved):
– vboxmanage controlvm “vm name” savestate

Take a png image screenshot of a vm:
– vboxmanage controlvm “vm name” screenshotpng filename.png
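
Putting a few of these together, here is a minimal sketch that saves the state of every running VM, parsing each UUID out of the braces in the ‘vboxmanage list runningvms’ output:

for uuid in $(vboxmanage list runningvms | awk -F'[{}]' '{print $2}'); do
  vboxmanage controlvm "$uuid" savestate   # save state and stop each running VM
done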


Virtualization-related notes

Guest vs Host:

  • Host – usually runs on the physical hardware; the lower level.
  • Guest – runs in the virtual/virtualized environment; the upper level.

Virtualization types:

  • Full virtualization – virtualize all the devices!
    • Can run almost all operating systems without any modifications.
    • Emulates all the devices.
    • Slower than paravirtualization and operating-system-level virtualization.
    • Software emulation (without hardware-assisted virtualization)
      • Very slow.
      • Needs work such as binary translation or software instruction decoding, which carries heavy overhead and is very inefficient.
    • Paravirtualization on HVM
      • Full virtualization with paravirtualization drivers.
  • Paravirtualization (PV) – uses a modified kernel to interact with a special interface
    • The guest knows it is a guest on the host; the guest communicates with the hypervisor.
    • Uses hypercalls (calls to the hypervisor) as its system calls.
    • Hard (almost impossible) to modify the kernel of closed-source operating systems like Windows to use this method.
    • Faster than full virtualization but slower than operating-system-level virtualization.
  • Hardware-assisted virtualization (HVM, HAV)
    • Uses help from hardware capabilities.
    • Faster than software emulation.
    • Technique examples: Intel VT-x, AMD-V.
  • Operating-system-level virtualization – doesn't really virtualize the devices
    • Fastest – SUPER FAST!!!
    • Isolates different user-space instances.
    • Doesn't need hardware support.
    • Must run on the same kernel
      • Which means poor compatibility.
    • Examples: LXC (Linux Containers), as sketched after this list.
  • Partial virtualization – needs confirmation; not a common type
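
As a quick hands-on illustration of OS-level virtualization, a container can be created and entered with the LXC command-line tools (a minimal sketch; the container name test1 is arbitrary, and the download template interactively asks which distribution/release/architecture to use):

$ sudo lxc-create -t download -n test1
$ sudo lxc-start -n test1 -d
$ sudo lxc-attach -n test1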

Type-1 vs type-2 hypervisor:

  • Type-1 (bare-metal) – the hypervisor runs directly on the host hardware, e.g. Xen.
  • Type-2 (hosted) – the hypervisor runs on top of a conventional operating system, e.g. VirtualBox.

Wikipedia also uses application/environment and OS levels to distinguish different types of virtualization:

  • Application-level
    • Sandbox
  • Environment-level
    • Containers
  • OS-level
    • Hypervisors

Common integrated virtualization solutions:

  • Proxmox VE
  • XenServer

Resources and references:

  • https://blog.xenproject.org/2012/10/31/the-paravirtualization-spectrum-part-2-from-poles-to-a-spectrum/

Still learning; I hope there are not too many wrong things here. Comments pointing out mistakes or weak points are welcome!

Use Xen Orchestra to manage XenServer from a Web UI

Proxmox VE supports a Web UI by default, but XenServer does not. Fortunately, there is Xen Orchestra (XOA) to help us manage XenServer via a Web UI, with https supported by default. You can download Xen Orchestra from https://xen-orchestra.com/; they also publish the projects at https://github.com/vatesfr.

I tried the free version of Xen Orchestra. The release medium is also a XenServer template, so import it into a XenServer via OpenXenManager or XenCenter. If you want to use a static IP instead of DHCP, log in and set up its network interface; the default login/password is root/xoa for XOA v3.6/3.7 (root/root for XOA v3.5).

After that, you can visit its panel in your browser and log in. It supports both http and https, and the default login/password is admin@admin.net/admin.
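
A quick way to confirm the panel is reachable before opening a browser (a sketch; xoa.example.com stands in for the address you configured, and -k skips certificate verification in case the default certificate is self-signed):

$ curl -kI https://xoa.example.com/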

Login screenshot:

More screenshots:

XenServer and user settings:

Tree View:

Overview of XenServer Host:

Overview of one of my VMs:

Console of VM:

I think Xen Orchestra really makes XenServer more convenient, but it is not as fully functional as XenCenter or OpenXenManager.