Notes for building KVM-based virtualization components from upstream git

I frequently need to have the latest KVM, QEMU, libvirt and libguestfs while testing with OpenStack RDO. I either build from the upstream git master branches or from Fedora Rawhide (mostly the latter suffices). Below I describe the exact sequence I use to build from git. These instructions are available in some form in the README files of the said packages; I'm just noting them here explicitly for convenience. My primary development/test environment is Fedora, but it should be similar on other distributions. (Maybe I should just script it all.)

Build KVM from git

I think it's worth noting the distinction (from the traditional master branch) of these KVM git branches: remotes/origin/queue and remotes/origin/next. The queue and next branches are the same most of the time, with the distinction that KVM queue is the branch where patches are usually tested before they are moved to the KVM next branch. Commits from the next branch are then submitted (as a PULL request) to Linus during the next kernel merge window. (I recall this from an old conversation on IRC with Gleb Natapov (thank you), one of the previous KVM maintainers.)
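
For instance, to build from the next (or queue) branch instead of master, create a local tracking branch once the repo is cloned (see the clone step below); the branch names here are just the upstream ones noted above:

# List the remote KVM branches
$ git branch -r

# Create a local branch tracking 'next'
$ git checkout -b kvm-next remotes/origin/next

# (Or, similarly, for 'queue')
$ git checkout -b kvm-queue remotes/origin/queue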

# Clone the repo and enter it
$ git clone \
  git://git.kernel.org/pub/scm/virt/kvm/kvm.git && cd kvm

# To test out of tree patches,
# it's cleaner to do in a new branch
$ git checkout -b test_branch

# Make a config file
$ make defconfig

# Compile
$ make -j4 && make bzImage && make modules

# Install the modules and the kernel image
$ sudo make modules_install && sudo make install

Build QEMU from git

To build QEMU (only x86_64 target) from its git:

# Install build dependencies of QEMU
$ yum-builddep qemu

# Clone the repo (into ~/src/qemu, to match the
# configure invocation below)
$ git clone git://git.qemu.org/qemu.git ~/src/qemu

# Create a build directory to isolate source directory 
# from build directory
$ mkdir -p ~/build/qemu && cd ~/build/qemu

# Run the configure script
$ ~/src/qemu/configure --target-list=x86_64-softmmu \
  --disable-werror --enable-debug

# Compile
$ make -j4
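
The freshly built binary can be exercised without installing it; with the target chosen above it should land under the x86_64-softmmu sub-directory of the build tree (path assumed from the build layout used here):

# Quick sanity check of the just-built binary
$ ~/build/qemu/x86_64-softmmu/qemu-system-x86_64 --version

# Optionally, install it
$ sudo make install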

I previously wrote about building QEMU here.

Build libvirt from git

To build libvirt from its upstream git:

# Install build dependencies of libvirt
$ yum-builddep libvirt

# Clone the libvirt repo (into ~/src/libvirt)
$ git clone git://libvirt.org/libvirt.git ~/src/libvirt

# Create a build directory to isolate source directory
# from build directory
$ mkdir -p ~/build/libvirt && cd ~/build/libvirt

# Run the autogen script from the source directory
$ ~/src/libvirt/autogen.sh

# Compile
$ make -j4

# Run tests
$ make check

# Invoke libvirt programs without having to install them
$ ./run tools/virsh [. . .]
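
As a quick sanity check, the run script lets you exercise the just-built virsh (the connection URI here is just an example):

# List all defined guests via the freshly built virsh
$ ./run tools/virsh --connect qemu:///session list --all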

[Or, prepare RPMs and install them]

# Make RPMs (assumes Fedora `rpmbuild` setup
# is properly configured)
$ make rpm

# Install/update
$ yum update *.rpm

Build libguestfs from git

To build libguestfs from its upstream git:

# Install build dependencies of libguestfs
$ yum-builddep libguestfs

# Clone the libguestfs repo
$ git clone git://github.com/libguestfs/libguestfs.git \
   && cd libguestfs

# Run the autogen script
$ ./autogen.sh

# Compile
$ make -j4

# Run tests
$ make check

# Invoke libguestfs programs without having to install them
$ ./run guestfish [. . .]

If you'd rather have libguestfs use the custom QEMU built from git (as noted above), QEMU wrappers are useful in this case.
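
A minimal wrapper sketch (the paths below are assumptions based on the build and source directories used above; adjust to your layout), along the lines of the example in the libguestfs documentation, with the LIBGUESTFS_QEMU environment variable pointing libguestfs at it:

$ cat ~/bin/qemu-wrapper.sh
#!/bin/sh -
# Run the QEMU binary built from git, pointing -L at the ROM/BIOS files
exec $HOME/build/qemu/x86_64-softmmu/qemu-system-x86_64 \
     -L $HOME/src/qemu/pc-bios "$@"

$ chmod +x ~/bin/qemu-wrapper.sh

# Tell libguestfs to use the wrapper
$ export LIBGUESTFS_QEMU=$HOME/bin/qemu-wrapper.sh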

As an alternative to building from upstream git, if you'd prefer to build the above components locally from Fedora master (Rawhide), here are some instructions.
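
For the Fedora route, a rough sketch (assuming the fedpkg tool is installed; QEMU is used as an example) looks like this:

# Clone the Fedora dist-git repo for QEMU (anonymously)
$ fedpkg clone -a qemu && cd qemu

# Install build dependencies, then build RPMs locally
# from the master (Rawhide) branch
$ yum-builddep qemu
$ fedpkg local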


Create a minimal, automated Btrfs Fedora guest template with Oz

I needed to create an automated Btrfs guest for a test. A trivial virt-install based automated script couldn't complete the guest install (on a Fedora Rawhide host): it hung perpetually once it tried to retrieve .treeinfo, vmlinuz and initrd.img. On a cursory investigation with guestfish, I couldn't pinpoint the root cause. Filed a bug here.

[Edit, 31JAN2014: The above is fixed by adding --console=pty to the virt-install command line; refer to the above bug for discussion.]
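
For illustration, a minimal tree-based virt-install invocation with that option added would look roughly like this (the guest name, sizes and kickstart injection are placeholders/assumptions; the install URL is the Fedora 20 tree used later in the TDL):

$ virt-install --name f20-btrfs --ram 2048 \
  --disk size=10,format=qcow2 \
  --location http://dl.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/os/ \
  --initrd-inject=Fedora20-btrfs.auto \
  --extra-args "ks=file:/Fedora20-btrfs.auto console=ttyS0" \
  --graphics none --console pty,target_type=serial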

In case someone wants to reproduce with versions I noted in the above bug:

  $ wget http://kashyapc.fedorapeople.org/virt/create-guest-qcow2-btrfs.bash
  $ chmod +x create-guest-qcow2-btrfs.bash
  $ ./create-guest-qcow2-btrfs.bash fed-btrfs2 f20

Oz

So, I resorted to Oz, a utility to create automated installs of various Linux distributions. I previously wrote about it here.

Below I describe a way to use it to create a minimal, automated Btrfs Fedora 20 guest template.

Install it:

 $ yum install oz -y 

A minimal Kickstart file with Btrfs partitioning:

$ cat Fedora20-btrfs.auto
install
text
keyboard us
lang en_US.UTF-8
network --device eth0 --bootproto dhcp
rootpw fedora
firewall --enabled ssh
selinux --enforcing
timezone --utc America/New_York
bootloader --location=mbr --append="console=tty0 console=ttyS0,115200"
zerombr
clearpart --all --drives=vda
autopart --type=btrfs
reboot

%packages
@core
%end

A TDL (Template Description Language) file that Oz needs as input:

$ cat f20.tdl
<template>
  <name>f20btrfs</name>
  <os>
    <name>Fedora</name>
    <version>20</version>
    <arch>x86_64</arch>
    <install type='url'>
      <url>http://dl.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/os/</url>
    </install>
    <rootpw>fedora</rootpw>
  </os>
  <description>Fedora 20</description>
</template>

Invoke oz-install (supply the Kickstart and TDL files; set debugging to level 4, which prints all details):

$ oz-install -a Fedora20-btrfs.auto -d 4 f20.tdl 
[. . .]
INFO:oz.Guest.FedoraGuest:Cleaning up after install
Libvirt XML was written to f20btrfsJan_29_2014-11:38:44

Define the above Libvirt guest XML, start it over a serial console:

$  virsh define f20btrfsJan_29_2014-11:38:44
Domain f20btrfs defined from f20btrfsJan_29_2014-11:38:44

$ virsh start f20btrfs --console
[. . .]
Connected to domain f20btrfs
Escape character is ^]

Fedora release 20 (Heisenbug)
Kernel 3.11.10-301.fc20.x86_64 on an x86_64 (ttyS0)

localhost login: 

It's automated, fine, but still slightly tedious (the TDL file looks redundant at this point) for creating custom guest image templates.


Capturing x86 CPU diagnostics

Some time ago I learnt, from Paolo Bonzini (upstream KVM maintainer), about a little debugging utility, x86info (written by Dave Jones), which captures detailed CPU diagnostics: TLB, cache sizes, CPU feature flags, model-specific registers, etc. Take a look at its man page for specifics.

Install:

 $  yum install x86info -y 

Run it and capture the output in a file:

 $  x86info -a 2>&1 | tee stdout-x86info.txt  

As part of debugging KVM-based nested virtualization issues, here I captured the x86info output of L0 (bare metal, Intel Haswell), L1 (guest hypervisor) and L2 (nested guest, running on L1).
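
Relatedly, when chasing nested virtualization issues on Intel hosts, a couple of quick checks complement the x86info output (these are the standard sysfs/procfs locations, noted here for convenience):

# On L0: is nested virt enabled in the kvm_intel module?
$ cat /sys/module/kvm_intel/parameters/nested

# On L1: does the guest hypervisor see the vmx flag?
$ grep -E 'vmx' /proc/cpuinfo | head -1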


virt-builder, to trivially create various Linux distribution guest images

I frequently use virt-builder (part of libguestfs-tools package) as part of my work flow.

Rich has documented it extensively; still, I felt it's worth pointing out its sheer simplicity again.

For instance, if you need to create a 100G Fedora 20 guest in qcow2 format, it's as trivial as this (no need for root login):

$ virt-builder fedora-20 --format qcow2 --size 100G
[   1.0] Downloading: http://libguestfs.org/download/builder/fedora-20.xz
#######################################################################  100.0%
[ 131.0] Planning how to build this image
[ 131.0] Uncompressing
[ 139.0] Resizing (using virt-resize) to expand the disk to 100.0G
[ 220.0] Opening the new disk
[ 225.0] Setting a random seed
[ 225.0] Setting random root password [did you mean to use --root-password?]
Setting random password of root to N4KkQjZTgdfjjqJJ
[ 225.0] Finishing off
Output: fedora-20.qcow2
Output size: 100.0G
Output format: qcow2
Total usable space: 97.7G
      Free space: 97.0G (99%)
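
Note the hint in the output above: instead of a random root password, you can set one explicitly with the --root-password option (the password below is just an example):

$ virt-builder fedora-20 --format qcow2 --size 100G \
  --root-password password:mysecret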

Then, import the just created image:

$ virt-install --name guest-hyp --ram 8192 --vcpus=4 \
  --disk path=/home/test/vmimages/fedora-20.qcow2,format=qcow2,cache=none \
  --import

It provides a serial console for login.

You could also create several other distribution variants (Debian, etc.); the available templates can be listed as shown below.
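
To see which templates are available:

$ virt-builder --list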


Script to create Neutron tenant networks

In my two-node OpenStack setup (RDO on Fedora 20), I often have to create multiple Neutron tenant networks (here you can read more on what a tenant network is) for various testing purposes.

To alleviate this manual process, here's a trivial script that creates a new Neutron tenant network in an existing OpenStack setup, once you provide a few positional parameters. This assumes there's a working OpenStack setup with Neutron configured. I tested this on Neutron + OVS + GRE, but it should work with other Neutron plugins too, as tenant networks are a Neutron concept (and not specific to plugins).

Usage:

$ ./create-new-tenant-network.sh                \
                    TENANTNAME USERNAME         \
                    SUBNETSPACE ROUTERNAME      \
                    PRIVNETNAME PRIVSUBNETNAME

To create a new tenant network with 14.0.0.0/24 subnet:

$ ./create-new-tenant-network.sh \
  demoten1 tuser1                \
  14.0.0.0 trouter1              \
  priv-net1 priv-subnet1

The script does the following, in this order:

  1. Creates a Keystone tenant called demoten1.
  2. Then, creates a Keystone user called tuser1 and associates it
    with the demoten1 tenant.
  3. Creates a Keystone RC file for the user (tuser1) and sources it.
  4. Creates a new private network called priv-net1.
  5. Creates a new private subnet called priv-subnet1 on priv-net1.
  6. Creates a router called trouter1.
  7. Associates the router (trouter1 in this case) to an existing external network (the script assumes it's called ext) by setting it as its gateway.
  8. Associates the private network interface (priv-net1) to the router (trouter1).
  9. Adds Neutron security group rules for this test tenant (demoten1) for ICMP and SSH.

To test if it's all working, try booting a new Nova guest in the tenant network; it should acquire an IP address from the 14.0.0.0/24 subnet.
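
For example, something along these lines, after sourcing the new user's credentials (the image and flavor names are placeholders for whatever exists in your setup):

$ source keystonerc_tuser1
$ NET_ID=$(neutron net-list | grep priv-net1 | awk '{print $2;}')
$ nova boot --flavor m1.small --image fedora-20 \
       --nic net-id=$NET_ID testvm1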

Posting the relevant part of the script:

[. . .]
# Source the admin credentials
source keystonerc_admin


# Positional parameters
tenantname=$1
username=$2
subnetspace=$3
routername=$4
privnetname=$5
privsubnetname=$6


# Create a tenant, user and associate a role/tenant to it.
keystone tenant-create       \
         --name $tenantname

keystone user-create         \
         --name $username    \
         --pass fedora

keystone user-role-add       \
         --user $username    \
         --role user         \
         --tenant $tenantname

# Create an RC file for this user and source the credentials
cat >> keystonerc_$username<<EOF
export OS_USERNAME=$username
export OS_TENANT_NAME=$tenantname
export OS_PASSWORD=fedora
export OS_AUTH_URL=http://localhost:5000/v2.0/
export PS1='[\u@\h \W(keystone_$username)]\$ '
EOF


# Source this user credentials
source keystonerc_$username


# Create new private network, subnet for this user tenant
neutron net-create $privnetname

neutron subnet-create $privnetname \
        $subnetspace/24            \
        --name $privsubnetname


# Create a router
neutron router-create $routername


# Associate the router to the external network 
# by setting its gateway.
# NOTE: This assumes, the external network name is 'ext'
EXT_NET=$(neutron net-list     \
| grep ext | awk '{print $2;}')

PRIV_NET=$(neutron subnet-list \
| grep $privsubnetname | awk '{print $2;}')

ROUTER_ID=$(neutron router-list \
| grep $routername | awk '{print $2;}')

neutron router-gateway-set  \
        $ROUTER_ID $EXT_NET

neutron router-interface-add \
        $ROUTER_ID $PRIV_NET


# Add Neutron security groups for this test tenant
neutron security-group-rule-create   \
        --protocol icmp              \
        --direction ingress          \
        --remote-ip-prefix 0.0.0.0/0 \
        default

neutron security-group-rule-create   \
        --protocol tcp               \
        --port-range-min 22          \
        --port-range-max 22          \
        --direction ingress          \
        --remote-ip-prefix 0.0.0.0/0 \
        default

NOTE: As the shell script is executed in a sub-process (of the parent shell), the sourcing of the newly created user's keystone credentials won't persist in your interactive shell. (You can notice it in the stdout of the script in debug mode.)

If it's helpful for someone, here are my Neutron configurations and iptables rules for a two-node setup with Neutron + OVS + GRE.


OpenStack in Action, Paris (5DEC2013) – quick recap

I decided at the last moment (thanks to Dave Neary for notifying me) to make a quick visit to Paris for this one-day event, OpenStack in Action 4.

The conference was very well organized. The day's agenda was split into high-level keynotes for the first half of the morning, and technical and business sessions for the rest of the day.

Of the morning keynote sessions, I attended two fully. First, Red Hat's Paul Cormier's keynote on Open Clouds; among the non-technical keynotes, I felt this was very well presented, both content-wise and visually. Second, Thierry Carrez's (OpenStack Release Manager) talk on “Havana to Icehouse”. Thierry gave an excellent overview of what was accomplished during the Havana release cycle and discussed the work in progress for the upcoming Icehouse release.

Among the technical sessions, the one I paid closest attention to was Mark McClain's (Neutron PTL) “From Segments to Services, a Dive into OpenStack Networking”. Mark started with a high-level overview of Neutron networking, followed by a discussion of its various aspects: architecture of the Neutron API; flow of a Neutron API request (originating from the Neutron CLI/Horizon web UI); some of the common features across Neutron plugins (support for overlapping IPs, DHCP, Floating IPs); Neutron security groups; metadata; some advanced services (Load Balancing, Firewall, VPN); and provider networks. An interesting thing I learnt about was the Neutron Modular Layer 2 (ML2) plugin, which would combine the Open vSwitch and Linux Bridge plugins into a single plugin.

All the talks were recorded and should be on the web soon.


Neutron configs for a two-node OpenStack Havana setup (on Fedora-20)

I managed to prepare a two-node OpenStack Havana setup, hand-configured (URL to notes below). Here are some Neutron configurations that worked for me.

Setup details:

  • Two Fedora 20 minimal (@core) virtual machines to run the Controller & Compute nodes.
  • Services on Controller node: Keystone, Cinder, Glance, Neutron, Nova. Neutron networking is set up with the OpenvSwitch plugin, network namespaces and GRE tunneling.
  • Services on Compute node: Nova (openstack-nova-compute service), Neutron (neutron-openvswitch-agent), libvirtd, OpenvSwitch.
  • Both nodes are manually configured. Notes are here.

Configurations

OpenvSwitch plugin configuration — /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini — on Controller node:

$ cat plugin.ini | grep -v ^$ | grep -v ^#
[ovs]
[agent]
[securitygroup]
[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 192.168.122.163
[DATABASE]
sql_connection = mysql://neutron:fedora@vm01-controller/ovs_neutron
[SECURITYGROUP]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Neutron configuration — /etc/neutron/neutron.conf:

$ cat neutron.conf | grep -v ^$ | grep -v ^#
[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
rpc_backend = neutron.openstack.common.rpc.impl_qpid
qpid_hostname = localhost
auth_strategy = keystone
ovs_use_veth = True
allow_overlapping_ips = True
qpid_port = 5672
[quotas]
quota_network = 20
quota_subnet = 20
[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
[keystone_authtoken]
auth_host = 192.168.122.163
admin_tenant_name = services
admin_user = neutron
admin_password = fedora
[database]
[service_providers]

Neutron L3 agent configuration — /etc/neutron/l3_agent.ini:

$ cat l3_agent.ini | grep -v ^$ | grep -v ^#
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
handle_internal_only_routers = TRUE
ovs_use_veth = True
use_namespaces = True
metadata_ip = 192.168.122.163
metadata_port = 8700

Neutron metadata agent — /etc/neutron/metadata_agent.ini:

$ cat metadata_agent.ini | grep -v ^$ | grep -v ^#
[DEFAULT]
auth_url = http://192.168.122.163:35357/v2.0/
auth_region = regionOne
admin_tenant_name = services
admin_user = neutron
admin_password = fedora
nova_metadata_ip = 192.168.122.163
nova_metadata_port = 8700
metadata_proxy_shared_secret = fedora

iptables rules on Controller node:

$ cat /etc/sysconfig/iptables
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m multiport --dports 3260 -m comment --comment "001 cinder incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 80 -m comment --comment "001 horizon incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 9292 -m comment --comment "001 glance incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 5000,35357 -m comment --comment "001 keystone incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 3306 -m comment --comment "001 mariadb incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 6080 -m comment --comment "001 novncproxy incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 8770:8780 -m comment --comment "001 novaapi incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 9696 -m comment --comment "001 neutron incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 5672 -m comment --comment "001 qpid incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 8700 -m comment --comment "001 metadata incoming" -j ACCEPT 
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 5900:5999 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A INPUT -p gre -j ACCEPT 
-A OUTPUT -p gre -j ACCEPT
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

iptables rules on Compute node:

$ cat /etc/sysconfig/iptables
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 5900:5999 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

OpenvSwitch database contents:

$ ovs-vsctl show
6f5d0e33-7013-4816-bc97-29af9abe8309
    Bridge br-int
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port "tap63ea2815-b5"
            tag: 1
            Interface "tap63ea2815-b5"
    Bridge br-ex
        Port "eth0"
            Interface "eth0"
        Port "tape7110dba-a9"
            Interface "tape7110dba-a9"
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, local_ip="192.168.122.163", out_key=flow, remote_ip="192.168.122.100"}
    ovs_version: "2.0.0"

NOTE: I SCPed the Neutron configuration neutron.conf and the OpenvSwitch plugin configuration plugin.ini from the Controller to the Compute node (don't forget to replace the local_ip attribute appropriately; I made that mistake).
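
Roughly, that amounts to the following (the Compute node's host name is a placeholder):

# From the Controller node, copy the configs over
$ scp /etc/neutron/neutron.conf \
      root@compute-node:/etc/neutron/neutron.conf
$ scp /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini \
      root@compute-node:/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

# Then, on the Compute node, adjust local_ip in ovs_neutron_plugin.ini
# and restart the OVS agent
$ systemctl restart neutron-openvswitch-agent.service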

A couple of non-deterministic issues I'm still investigating on a new setup with a non-default libvirt network as the external network (on my current setup I used libvirt's default subnet, 192.168.x.x; Lars pointed out that could probably be the cause of some of the routing issues):

  • Sporadic loss of networking for Nova guests. This got resolved (at least partially) when I invoked VNC of the guest (via SSH tunneling) and did some basic diagnostics; networking then comes up just fine in the guests (GRE tunnels going stale?). tcpdump analysis on various network devices (tunnels/bridges/tap devices) on both Controller & Compute nodes is in progress.
  • Nova guests fail to acquire DHCP leases (I can clearly observe this when I explicitly do an ifdown eth0 && ifup eth0 from VNC of the guest). The Neutron DHCP agent seems to be flaky here.

TIP: On Fedora, the openstack-utils package (from version openstack-utils-2013.2-2.fc21.noarch) includes a neat utility called openstack-service which allows you to trivially control OpenStack services. This makes life much easier. Thanks to Lars!
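
For instance (the sub-commands below are the ones I use; check the utility's usage output on your version):

# Status of all OpenStack services on this node
$ openstack-service status

# Restart only the Neutron services
$ openstack-service restart neutron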
