Create a minimal Fedora Rawhide guest

Create a Fedora 20 guest from scratch using virt-builder, fully update it, and install a couple of packages:

$ virt-builder fedora-20 -o rawhide.qcow2 --format qcow2 \
  --update --selinux-relabel --size 40G \
  --install "fedora-release-rawhide yum-utils"

[UPDATE: Starting with Fedora 21, the fedora-release-rawhide package is renamed to fedora-repos-rawhide.]

Import the disk image into libvirt:

$ virt-install --name rawhide --ram 4096 --disk \
  path=/home/kashyapc/rawhide.qcow2,format=qcow2,cache=none \
  --import

Log in to the guest via the serial console and upgrade to Rawhide:

$ yum-config-manager --disable fedora updates updates-testing
$ yum-config-manager --enable rawhide
$ yum --releasever=rawhide distro-sync --nogpgcheck
$ reboot

Optionally, create a QCOW2 internal snapshot (live or offline) of the guest:

$ virsh snapshot-create-as rawhide snap1 \
  "Clean Rawhide 19MAR2014"

Here are a couple of other methods for upgrading to Rawhide.


Notes for building KVM-based virtualization components from upstream git

I frequently need the latest KVM, QEMU, libvirt and libguestfs while testing with OpenStack RDO. I either build from the upstream git master branch or from Fedora Rawhide (mostly this suffices). Below I describe the exact sequence I use to build them from git. These instructions are available in some form in the README files of the said packages; I’m just noting them here explicitly for convenience. My primary development/test environment is Fedora, but it should be similar on other distributions. (Maybe I should just script it all.)

Build KVM from git

I think it’s worth noting the distinction (from the traditional master branch) of these KVM git branches: remotes/origin/queue and remotes/origin/next. The queue and next branches are the same most of the time, with the distinction that queue is where patches are usually tested before they move to next. Commits from the next branch are submitted (as a PULL request) to Linus during the next kernel merge window. (I recall this from an old conversation on IRC with Gleb Natapov (thank you), one of the previous KVM maintainers.)
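
Once the repo is cloned (see below), you can create a local branch tracking one of them, for instance:

$ git checkout -b next remotes/origin/next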

# Clone the repo
$ git clone \
  git://git.kernel.org/pub/scm/virt/kvm/kvm.git
$ cd kvm

# To test out-of-tree patches,
# it's cleaner to do so in a new branch
$ git checkout -b test_branch

# Make a config file
$ make defconfig

# Compile
$ make -j4 && make bzImage && make modules

# Install
$ sudo -i
$ make modules_install && make install
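
After rebooting into the newly installed kernel, a quick sanity check that KVM is usable:

# Verify the running kernel version, the KVM
# modules, and the /dev/kvm device node
$ uname -r
$ lsmod | grep kvm
$ ls -l /dev/kvm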

Build QEMU from git

To build QEMU (only x86_64 target) from its git:

# Install build dependencies of QEMU
$ yum-builddep qemu

# Clone the repo (here into ~/src, so the
# configure invocation below finds the source)
$ mkdir -p ~/src && cd ~/src
$ git clone git://git.qemu.org/qemu.git

# Create a build directory to isolate source directory 
# from build directory
$ mkdir -p ~/build/qemu && cd ~/build/qemu

# Run the configure script
$ ~/src/qemu/configure --target-list=x86_64-softmmu \
  --disable-werror --enable-debug

# Compile
$ make -j4
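
You can sanity-check the resulting binary without installing it; with the above configure invocation, it lands under the target directory in the build tree:

$ ~/build/qemu/x86_64-softmmu/qemu-system-x86_64 --version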

I previously discussed building QEMU here.

Build libvirt from git

To build libvirt from its upstream git:

# Install build dependencies of libvirt
$ yum-builddep libvirt

# Clone the libvirt repo (here into ~/src)
$ mkdir -p ~/src && cd ~/src
$ git clone git://libvirt.org/libvirt.git

# Create a build directory to isolate source directory
# from build directory
$ mkdir -p ~/build/libvirt && cd ~/build/libvirt

# Run the autogen script from the source directory
$ ~/src/libvirt/autogen.sh

# Compile
$ make -j4

# Run tests
$ make check

# Invoke libvirt programs without having to install them
$ ./run tools/virsh [. . .]

[Or, prepare RPMs and install them]

# Make RPMs (assumes Fedora `rpmbuild` setup
# is properly configured)
$ make rpm

# Install/update
$ yum update *.rpm

Build libguestfs from git

To build libguestfs from its upstream git:

# Install build dependencies of libguestfs
$ yum-builddep libguestfs

# Clone the libguestfs repo
$ git clone git://github.com/libguestfs/libguestfs.git \
   && cd libguestfs

# Run the autogen script
$ ./autogen.sh

# Compile
$ make -j4

# Run tests
$ make check

# Invoke libguestfs programs without having to install them
$ ./run guestfish [. . .]

If you’d prefer libguestfs to use the custom QEMU built from git (as noted above), QEMU wrappers are useful in this case.
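
For instance, a minimal wrapper sketch (the paths are from the QEMU build above; depending on the libguestfs version, the environment variable to point at the wrapper is LIBGUESTFS_QEMU or LIBGUESTFS_HV):

$ cat > ~/bin/qemu-wrapper << 'EOF'
#!/bin/bash
# Exec the custom QEMU built from git, passing
# all of libguestfs' arguments through
exec ~/build/qemu/x86_64-softmmu/qemu-system-x86_64 "$@"
EOF
$ chmod +x ~/bin/qemu-wrapper
$ export LIBGUESTFS_QEMU=~/bin/qemu-wrapper
$ ./run guestfish [. . .]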

As an alternative to building from upstream git, if you’d prefer to build the above components locally from Fedora master, here are some instructions.


Create a minimal, automated Btrfs Fedora guest template with Oz

I needed to create an automated Btrfs guest for a test. A trivial virt-install-based automated script couldn’t complete the guest install (on a Fedora Rawhide host); it hung perpetually once it tried to retrieve .treeinfo, vmlinuz, and initrd.img. On cursory investigation with guestfish, I couldn’t pinpoint the root cause. Filed a bug here.

[Edit, 31JAN2014: The above is fixed by adding --console=pty to the virt-install command line; refer to the above bug for discussion.]

In case someone wants to reproduce with versions I noted in the above bug:

  $ wget http://kashyapc.fedorapeople.org/virt/create-guest-qcow2-btrfs.bash
  $ chmod +x create-guest-qcow2-btrfs.bash
  $ ./create-guest-qcow2-btrfs.bash fed-btrfs2 f20

Oz

So, I resorted to Oz, a utility to create automated installs of various Linux distributions. I previously wrote about it here.

Below I describe a way to use it to create a minimal, automated Btrfs Fedora 20 guest template.

Install it:

 $ yum install oz -y 

A minimal Kickstart file with Btrfs partitioning:

$ cat Fedora20-btrfs.auto
install
text
keyboard us
lang en_US.UTF-8
network --device eth0 --bootproto dhcp
rootpw fedora
firewall --enabled ssh
selinux --enforcing
timezone --utc America/New_York
bootloader --location=mbr --append="console=tty0 console=ttyS0,115200"
zerombr
clearpart --all --drives=vda
autopart --type=btrfs
reboot

%packages
@core
%end

A TDL (Template Description Language) file that Oz needs as input:

$ cat f20.tdl
<template>
  <name>f20btrfs</name>
  <os>
    <name>Fedora</name>
    <version>20</version>
    <arch>x86_64</arch>
    <install type='url'>
      <url>http://dl.fedoraproject.org/pub/fedora/linux/releases/20/Fedora/x86_64/os/</url>
    </install>
    <rootpw>fedora</rootpw>
  </os>
  <description>Fedora 20</description>
</template>

Invoke oz-install (supply the Kickstart and TDL files; turn debugging up to level 4 for all details):

$ oz-install -a Fedora20-btrfs.auto -d 4 f20.tdl 
[. . .]
INFO:oz.Guest.FedoraGuest:Cleaning up after install
Libvirt XML was written to f20btrfsJan_29_2014-11:38:44

Define the above libvirt guest XML and start the guest on a serial console:

$  virsh define f20btrfsJan_29_2014-11:38:44
Domain f20btrfs defined from f20btrfsJan_29_2014-11:38:44

$ virsh start f20btrfs --console
[. . .]
Connected to domain f20btrfs
Escape character is ^]

Fedora release 20 (Heisenbug)
Kernel 3.11.10-301.fc20.x86_64 on an x86_64 (ttyS0)

localhost login: 
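
Once logged in as root (password fedora, per the Kickstart file above), you can confirm the guest is indeed on Btrfs:

$ lsblk -f
$ btrfs subvolume list /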

It’s automated, sure, but still slightly tedious (the TDL file looks redundant at this point) for creating custom guest image templates.


Capturing x86 CPU diagnostics

Some time ago I learnt, from Paolo Bonzini (upstream KVM maintainer), about this little debugging utility, x86info (written by Dave Jones), which captures detailed information about CPU diagnostics: TLB, cache sizes, CPU feature flags, model-specific registers, etc. Take a look at its man page for specifics.

Install:

 $  yum install x86info -y 

Run it and capture the output in a file:

 $  x86info -a 2>&1 | tee stdout-x86info.txt  

As part of debugging KVM-based nested virtualization issues, here I captured the x86info of L0 (bare metal, Intel Haswell), L1 (guest hypervisor), and L2 (nested guest, running on L1).
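
To spot CPU feature flags that didn’t propagate from one level to the next, diffing the captured outputs is handy (the file names below are just illustrative):

$ diff -u stdout-x86info-L0.txt stdout-x86info-L1.txt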


virt-builder, to trivially create various Linux distribution guest images

I frequently use virt-builder (part of the libguestfs-tools package) as part of my workflow.

Rich has documented it extensively; still, I felt its sheer simplicity is worth pointing out again.

For instance, if you need to create a Fedora 20 guest of size 100G, in qcow2 format, it’s as trivial as this (no root login needed):

$ virt-builder fedora-20 --format qcow2 --size 100G
[   1.0] Downloading: http://libguestfs.org/download/builder/fedora-20.xz
#######################################################################  100.0%
[ 131.0] Planning how to build this image
[ 131.0] Uncompressing
[ 139.0] Resizing (using virt-resize) to expand the disk to 100.0G
[ 220.0] Opening the new disk
[ 225.0] Setting a random seed
[ 225.0] Setting random root password [did you mean to use --root-password?]
Setting random password of root to N4KkQjZTgdfjjqJJ
[ 225.0] Finishing off
Output: fedora-20.qcow2
Output size: 100.0G
Output format: qcow2
Total usable space: 97.7G
      Free space: 97.0G (99%)

Then, import the freshly created image:

$ virt-install --name guest-hyp --ram 8192 --vcpus=4 \
  --disk path=/home/test/vmimages/fedora-20.qcow2,format=qcow2,cache=none \
  --import

It provides a serial console for login.
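
By default a random root password is set (note the hint in the output above); if you’d rather have a known one, virt-builder’s --root-password option can be supplied, e.g.:

$ virt-builder fedora-20 --format qcow2 --size 100G \
  --root-password password:fedora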

You can also create several other distribution variants (Debian, etc.).


Script to create Neutron tenant networks

In my two-node OpenStack setup (RDO on Fedora 20), I often have to create multiple Neutron tenant networks (here you can read more on what a tenant network is) for various testing purposes.

To avoid this manual process, here is a trivial script that creates a new Neutron tenant network once you provide a few positional parameters, in an existing OpenStack setup. This assumes there’s a working OpenStack setup with Neutron configured. I tested this on Neutron + OVS + GRE, but it should work with any other Neutron plugin, as tenant networks are a Neutron concept (and not specific to plugins).

Usage:

$ ./create-new-tenant-network.sh            \
                    TENANTNAME USERNAME     \
                    SUBNETSPACE ROUTERNAME  \
                    PRIVNETNAME PRIVSUBNETNAME

To create a new tenant network with 14.0.0.0/24 subnet:

$ ./create-new-tenant-network.sh \
  demoten1 tuser1                \
  14.0.0.0 trouter1              \
  priv-net1 priv-subnet1

The script does the following, in this order:

  1. Creates a Keystone tenant called demoten1.
  2. Then, a Keystone user called tuser1 and associates it with demoten1.
  3. Creates a Keystone RC file for the user (tuser1) and sources it.
  4. Creates a new private network called priv-net1.
  5. Creates a new private subnet called priv-subnet1 on priv-net1.
  6. Creates a router called trouter1.
  7. Associates the router (trouter1 in this case) with an existing external network (the script assumes it’s called ext) by setting it as its gateway.
  8. Associates the private network interface (priv-net1) to the router (trouter1).
  9. Adds Neutron security group rules for this test tenant (demoten1) for ICMP and SSH.

To test that it’s all working, try booting a new Nova guest in the tenant network; it should acquire an IP address from the 14.0.0.0/24 subnet.
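
For instance, a sketch along these lines (the image and flavor names are illustrative):

$ source keystonerc_tuser1
$ NET_ID=$(neutron net-list | grep priv-net1 | awk '{print $2;}')
$ nova boot --flavor m1.small --image fedora-20 \
  --nic net-id=$NET_ID testvm1
$ nova list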

Posting the relevant part of the script:

[. . .]
# Source the admin credentials
source keystonerc_admin


# Positional parameters
tenantname=$1
username=$2
subnetspace=$3
routername=$4
privnetname=$5
privsubnetname=$6


# Create a tenant and a user; associate a role/tenant with the user.
keystone tenant-create       \
         --name $tenantname
 
keystone user-create         \
         --name $username    \
         --pass fedora

keystone user-role-add       \
         --user $username    \
         --role user         \
         --tenant $tenantname

# Create an RC file for this user and source the credentials
cat > keystonerc_$username <<EOF
export OS_USERNAME=$username
export OS_TENANT_NAME=$tenantname
export OS_PASSWORD=fedora
export OS_AUTH_URL=http://localhost:5000/v2.0/
export PS1='[\u@\h \W(keystone_$username)]\$ '
EOF


# Source this user's credentials
source keystonerc_$username


# Create a new private network and subnet for this user tenant
neutron net-create $privnetname

neutron subnet-create $privnetname \
        $subnetspace/24            \
        --name $privsubnetname


# Create a router
neutron router-create $routername


# Associate the router with the external network
# by setting it as the router's gateway.
# NOTE: This assumes the external network is named 'ext'
EXT_NET=$(neutron net-list     \
| grep ext | awk '{print $2;}')

PRIV_SUBNET=$(neutron subnet-list \
| grep $privsubnetname | awk '{print $2;}')

ROUTER_ID=$(neutron router-list \
| grep $routername | awk '{print $2;}')

neutron router-gateway-set \
        $ROUTER_ID $EXT_NET

neutron router-interface-add \
        $ROUTER_ID $PRIV_SUBNET


# Add Neutron security group rules for this test tenant
neutron security-group-rule-create   \
        --protocol icmp              \
        --direction ingress          \
        --remote-ip-prefix 0.0.0.0/0 \
        default

neutron security-group-rule-create   \
        --protocol tcp               \
        --port-range-min 22          \
        --port-range-max 22          \
        --direction ingress          \
        --remote-ip-prefix 0.0.0.0/0 \
        default

NOTE: As the shell script is executed in a sub-process (of the parent shell), the sourcing of the newly created user’s Keystone credentials won’t persist in your interactive shell. (You can see it happen in the script’s stdout in debug mode.)
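
To watch that happen, run the script in debug mode, which echoes each command as it executes:

$ bash -x ./create-new-tenant-network.sh \
  demoten1 tuser1 14.0.0.0 trouter1      \
  priv-net1 priv-subnet1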

If it’s helpful for someone, here are my Neutron configurations and iptables rules for a two-node setup with Neutron + OVS + GRE:


OpenStack in Action, Paris (5DEC2013) – quick recap

I decided at the last moment (thanks to Dave Neary for notifying me) to make a quick visit to Paris, for this one day event OpenStack in Action 4.

The conference was very well organized. The day’s agenda was split into high-level keynotes for the first half of the morning, and technical and business sessions for the rest of the day.

Of the morning keynote sessions, I attended two fully. First, Red Hat’s Paul Cormier’s keynote on Open Clouds; among the non-technical keynotes, I felt this was the best presented, both in content and visually. Second, Thierry Carrez’s (OpenStack Release Manager) talk, “Havana to Icehouse”. Thierry gave an excellent overview of what was accomplished during the Havana release cycle and discussed the work in progress for the upcoming Icehouse release.

Among the technical sessions, the one I paid closest attention to was Mark McClain’s (Neutron PTL) “From Segments to Services, a Dive into OpenStack Networking”. Mark started with a high-level overview of Neutron networking, followed by a discussion of its various aspects: the architecture of the Neutron API; the flow of a Neutron API request (originating from the Neutron CLI or the Horizon web UI); some of the features common across Neutron plugins, such as support for overlapping IPs, DHCP, and floating IPs; Neutron security groups; metadata; some advanced services (load balancing, firewall, VPN); and provider networks. An interesting thing I learnt was about the Neutron Modular Layer 2 (ML2) plugin, which would combine the Open vSwitch and Linux Bridge plugins into a single plugin.

All the talks were recorded and should be on the web soon.
