virt-builder, to trivially create various Linux distribution guest images

I frequently use virt-builder (part of the libguestfs-tools package) as part of my workflow.

Rich has documented it extensively; still, I felt it's worth pointing out its sheer simplicity again.

For instance, if you need to create a Fedora 20 guest of size 100G in qcow2 format, it's as trivial as this (no need for root login):

$ virt-builder fedora-20 --format qcow2 --size 100G
[   1.0] Downloading:
#######################################################################  100.0%
[ 131.0] Planning how to build this image
[ 131.0] Uncompressing
[ 139.0] Resizing (using virt-resize) to expand the disk to 100.0G
[ 220.0] Opening the new disk
[ 225.0] Setting a random seed
[ 225.0] Setting random root password [did you mean to use --root-password?]
Setting random password of root to N4KkQjZTgdfjjqJJ
[ 225.0] Finishing off
Output: fedora-20.qcow2
Output size: 100.0G
Output format: qcow2
Total usable space: 97.7G
      Free space: 97.0G (99%)

Then, import the just created image:

$ virt-install --name guest-hyp --ram 8192 --vcpus=4 \
  --disk path=/home/test/vmimages/fedora-20.qcow2,format=qcow2,cache=none \
  --import --graphics none

With --graphics none, it provides a serial console for login.

You could also create several other distribution variants – Debian, Ubuntu, etc. (virt-builder --list shows all the available templates).

Leave a comment

Filed under Uncategorized

Script to create Neutron tenant networks

In my two node OpenStack setup (RDO on Fedora 20), I often have to create multiple Neutron tenant networks (here you can read more on what a tenant network is) for various testing purposes.

To alleviate this manual process, here's a trivial script that creates a new Neutron tenant network once you provide a few positional parameters, in an existing OpenStack setup. It assumes a working OpenStack setup with Neutron configured. I tested this on Neutron + OVS + GRE, but it should work with any other Neutron plugin, as tenant networks are a Neutron concept (and not specific to a plugin).


$ ./                            \
                    TENANTNAME USERNAME         \
                    SUBNETSPACE ROUTERNAME      \
                    NETNAME SUBNETNAME
To create a new tenant network with subnet:

$ ./ \
  demoten1 tuser1        \
  trouter1               \
  priv-net1 priv-subnet1

The script does the following, in this order:

  1. Creates a Keystone tenant called demoten1.
  2. Then creates a Keystone user called tuser1 and associates it to the tenant (demoten1) with the user role.
  3. Creates a Keystone RC file for the user (tuser1) and sources it.
  4. Creates a new private network called priv-net1.
  5. Creates a new private subnet called priv-subnet1 on priv-net1.
  6. Creates a router called trouter1.
  7. Associates the router (trouter1 in this case) to an existing external network (the script assumes it's called ext) by setting it as its gateway.
  8. Associates the private network interface (priv-net1) to the router (trouter1).
  9. Adds Neutron security group rules for this test tenant (demoten1) for ICMP and SSH.

To test if it's all working, try booting a new Nova guest on the tenant network; it should acquire an IP address from the subnet.

Posting the relevant part of the script:

[. . .]
# Source the admin credentials
source keystonerc_admin

# Positional parameters
tenantname=$1
username=$2
subnetspace=$3
routername=$4
privnetname=$5
privsubnetname=$6

# Create a tenant, user and associate a role/tenant to it.
keystone tenant-create       \
         --name $tenantname
keystone user-create         \
         --name $username    \
         --pass fedora

keystone user-role-add       \
         --user $username    \
         --role user         \
         --tenant $tenantname

# Create an RC file for this user and source the credentials
cat >> keystonerc_$username <<EOF
export OS_USERNAME=$username
export OS_TENANT_NAME=$tenantname
export OS_PASSWORD=fedora
export OS_AUTH_URL=http://localhost:5000/v2.0/
export PS1='[\u@\h \W(keystone_$username)]\$ '
EOF

# Source this user's credentials
source keystonerc_$username

# Create new private network, subnet for this user tenant
neutron net-create $privnetname

neutron subnet-create $privnetname \
        $subnetspace/24            \
        --name $privsubnetname

# Create a router
neutron router-create $routername

# Associate the router to the external network 
# by setting its gateway.
# NOTE: This assumes the external network name is 'ext'
EXT_NET=$(neutron net-list     \
| grep ext | awk '{print $2;}')

PRIV_NET=$(neutron subnet-list \
| grep $privsubnetname | awk '{print $2;}')

ROUTER_ID=$(neutron router-list \
| grep $routername | awk '{print $2;}')

neutron router-gateway-set  \
        $ROUTER_ID $EXT_NET

neutron router-interface-add \
        $ROUTER_ID $PRIV_NET

# Add Neutron security groups for this test tenant
neutron security-group-rule-create   \
        --protocol icmp              \
        --direction ingress          \

neutron security-group-rule-create   \
        --protocol tcp               \
        --port-range-min 22          \
        --port-range-max 22          \
        --direction ingress          \
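The ID lookups above scrape the client's table output with grep and awk. Here's a self-contained way to see the pattern in isolation (the UUID and table row below are made up for illustration):

```shell
# A row in the table layout the neutron CLI prints (UUID is made up).
sample='+--------------------------------------+------+
| 1a2b3c4d-1111-2222-3333-444455556666 | ext  |
+--------------------------------------+------+'

# Same pattern as EXT_NET above: match the row, print the 2nd
# whitespace-separated field (the "|" is field 1, the ID is field 2).
EXT_NET=$(echo "$sample" | grep ext | awk '{print $2;}')
echo "$EXT_NET"    # prints: 1a2b3c4d-1111-2222-3333-444455556666
```

Note the grep matches by substring, so a network named, say, ext2 would also match; an exact-match filter like awk '$4 == "ext" {print $2}' is a bit more robust.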

NOTE: As the shell script is executed in a sub-process (of the parent shell), you won't notice the sourcing of the newly created user's keystonerc in your interactive shell. (You can see it in the stdout of the script in debug mode.)
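A tiny demonstration of that sub-process behavior (pure shell, nothing OpenStack-specific): exports made in a child shell vanish when it exits.

```shell
# The child shell gets its own copy of the environment;
# the export dies with the child process.
bash -c 'export OS_USERNAME=tuser1; echo "in child: $OS_USERNAME"'
# prints: in child: tuser1

echo "in parent: ${OS_USERNAME:-unset}"
# prints: in parent: unset
```

If you want the credentials in your interactive shell after the script finishes, source the generated keystonerc file yourself instead of relying on the script's source line.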

If it's helpful for someone, here are my Neutron configurations & iptables rules for a two node setup with Neutron + OVS + GRE:

1 Comment

Filed under Uncategorized

OpenStack in Action, Paris (5DEC2013) – quick recap

I decided at the last moment (thanks to Dave Neary for notifying me) to make a quick visit to Paris, for this one day event OpenStack in Action 4.

The conference was very well organized. The day's agenda was split into high-level keynotes for the first half of the morning, and technical and business sessions for the rest of the day.

From the morning keynote sessions, I attended two fully. First, Red Hat's Paul Cormier's keynote on Open Clouds; among the non-technical keynotes, I felt this was the best presented, both content-wise & visually. Second, Thierry Carrez's (OpenStack Release Manager) talk on "Havana to Icehouse". Thierry gave an excellent overview of what was accomplished during the Havana release cycle and discussed the work in progress for the upcoming Icehouse release.

Among the technical sessions, the one I paid closest attention to was Mark McClain's (Neutron PTL) "From Segments to Services, a Dive into OpenStack Networking". Mark started with a high-level overview of Neutron networking, followed by a discussion of its various aspects: the architecture of the Neutron API; the flow of a Neutron API request (originating from the Neutron CLI/Horizon web UI); some of the common features across Neutron plugins — support for overlapping IPs, DHCP, Floating IPs; Neutron security groups; metadata; some advanced services (Load Balancing, Firewall, VPN); and provider networks. An interesting thing I learnt about was the Neutron Modular Layer 2 (ML2) plugin, which would combine the Open vSwitch and Linux Bridge plugins into a single plugin.

All the talks were recorded; they should be on the web soon.

Leave a comment

Filed under Uncategorized

Neutron configs for a two-node OpenStack Havana setup (on Fedora-20)

I managed to prepare a two-node OpenStack Havana setup, hand-configured (URL to notes below). Here are some Neutron configurations that worked for me.

Setup details:

  • Two Fedora 20 minimal (@core) virtual machines to run the Controller & Compute nodes.
  • Services on the Controller node: Keystone, Cinder, Glance, Neutron, Nova. Neutron networking is set up with the OpenvSwitch plugin, network namespaces, and GRE tunneling.
  • Services on the Compute node: Nova (openstack-nova-compute service), Neutron (neutron-openvswitch-agent), libvirtd, OpenvSwitch.
  • Both nodes are manually configured. Notes are here.


OpenvSwitch plugin configuration — /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini — on Controller node:

$ cat plugin.ini | grep -v ^$ | grep -v ^#
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =
sql_connection = mysql://neutron:fedora@vm01-controller/ovs_neutron
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Neutron configuration — /etc/neutron/neutron.conf:

$ cat neutron.conf | grep -v ^$ | grep -v ^#
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
rpc_backend = neutron.openstack.common.rpc.impl_qpid
qpid_hostname = localhost
auth_strategy = keystone
ovs_use_veth = True
allow_overlapping_ips = True
qpid_port = 5672
quota_network = 20
quota_subnet = 20
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
auth_host =
admin_tenant_name = services
admin_user = neutron
admin_password = fedora

Neutron L3 agent configuration — /etc/neutron/l3_agent.ini:

$ cat l3_agent.ini | grep -v ^$ | grep -v ^#
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
handle_internal_only_routers = TRUE
ovs_use_veth = True
use_namespaces = True
metadata_ip =
metadata_port = 8700

Neutron metadata agent — /etc/neutron/metadata_agent.ini:

$ cat metadata_agent.ini | grep -v ^$ | grep -v ^#
auth_url =
auth_region = regionOne
admin_tenant_name = services
admin_user = neutron
admin_password = fedora
nova_metadata_ip =
nova_metadata_port = 8700
metadata_proxy_shared_secret = fedora
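A related note: the metadata_proxy_shared_secret above only works if Nova is configured with the same secret. A sketch of the matching bits in /etc/nova/nova.conf on the Controller (option names as they were in Havana; double-check against your version):

```ini
# /etc/nova/nova.conf (Controller node) -- must match metadata_agent.ini
service_neutron_metadata_proxy = True
neutron_metadata_proxy_shared_secret = fedora
```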

iptables rules on Controller node:

$ cat /etc/sysconfig/iptables
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m multiport --dports 3260 -m comment --comment "001 cinder incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 80 -m comment --comment "001 horizon incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 9292 -m comment --comment "001 glance incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 5000,35357 -m comment --comment "001 keystone incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 3306 -m comment --comment "001 mariadb incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 6080 -m comment --comment "001 novncproxy incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 8770:8780 -m comment --comment "001 novaapi incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 9696 -m comment --comment "001 neutron incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 5672 -m comment --comment "001 qpid incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 8700 -m comment --comment "001 metadata incoming" -j ACCEPT 
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 5900:5999 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A INPUT -p gre -j ACCEPT 
-A OUTPUT -p gre -j ACCEPT
-A FORWARD -j REJECT --reject-with icmp-host-prohibited

iptables rules on Compute node:

$ cat /etc/sysconfig/iptables
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 5900:5999 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited

OpenvSwitch database contents:

$ ovs-vsctl show
    Bridge br-int
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port "tap63ea2815-b5"
            tag: 1
            Interface "tap63ea2815-b5"
    Bridge br-ex
        Port "eth0"
            Interface "eth0"
        Port "tape7110dba-a9"
            Interface "tape7110dba-a9"
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, local_ip="", out_key=flow, remote_ip=""}
    ovs_version: "2.0.0"

NOTE: I SCPed the Neutron configuration neutron.conf and the OpenvSwitch plugin config plugin.ini from the Controller to the Compute node (don't forget to replace the local_ip attribute appropriately — I made that mistake).

A couple of non-deterministic issues I'm still investigating on a new setup with a non-default libvirt network as the external network (on my current setup I used libvirt's default subnet, 192.168.x.x; Lars pointed out that could probably be the cause of some of the routing issues):

  • Sporadic loss of networking for Nova guests. This got resolved (at least partially) when I invoked VNC of the guest (via SSH tunneling) & did some basic diagnostics; networking came up just fine in the guests again (GRE tunnels going stale?). tcpdump analysis on various network devices (tunnels/bridges/tap devices) on both Controller & Compute nodes is in progress.
  • Nova guests fail to acquire DHCP leases. (I can clearly observe this when I explicitly do an ifdown eth0 && ifup eth0 from VNC of the guest.) The Neutron DHCP agent seems to be flaky here.

TIP: On Fedora, the openstack-utils package (from version openstack-utils-2013.2-2.fc21.noarch) includes a neat utility called openstack-service which lets you trivially control OpenStack services. This makes life much easier, thanks to Lars!

Leave a comment

Filed under Uncategorized

KVMForum, LinuxCon/CloudOpen Eu 2013

KVMForum, CloudOpen, LinuxCon and several other co-located events are starting next week in Edinburgh.

Here’s the schedule information.

For those not able to attend KVMForum in person, the plan is currently to broadcast the sessions using Google Hangouts on Air. And here's the G+ page for KVMForum 2013.

Edit: Notes from my presentation on Nested Virt (KVM on KVM) —

Leave a comment

Filed under Uncategorized

Fedora 20 Virtualization Test Day — 8OCT2013

Cole Robinson announced Virtualization test day for Fedora 20.

For convenience, here’s what’s needed to get started.

And, as usual — tests can be performed any day from now and the wiki can be updated with your results/observations. Just that on the test day, more virtualization developers will be available to answer questions, etc on IRC (#fedora-test-day on Freenode).

Leave a comment

Filed under Uncategorized

FLOCK 2013, Retrospective


FLOCK just concluded last week. Given the very short time-frame, the conference was very well organized! (I know first-hand what pains it takes, having volunteered to organize FUDCon Pune a couple of years ago.) While not undermining others' efforts, I couldn't agree more with Spot about Ruth: "To put it bluntly, anything at Flock that you liked was probably her handiwork." Her super-efficiency shined through everything at FLOCK.

I attempted to write while in the middle of a couple of sessions, but I just couldn't context switch. (For instance, I have a partial draft that starts off with "I'm currently in the middle of Miloslav Trmač's discussion about Fedora Revamp…")

Here’s my (verbose) summary of how I spent my time at FLOCK.

Talks that I have attended

  • Matthew Miller's discussion of "cloud", and whether Fedora should care: This was a very high-level overview of the topic. For me, the main takeaway was the Fedora cloud SIG's near-term goals — more visibility, better documentation.
  • Crystal Ball talk/discussion by Stephen Gallagher, which discussed where Fedora is going over the next five years. All the discussion and notes are here.
  • Kernel Bug Triage, Live, by Dave Jones. Dave walked us through the process of triaging a bug. He also introduced some scripts he wrote to manage bugzilla workflow and related triaging aspects.
  • Fedora Revamp by Miloslav Trmač — This was more of a discussion about how to improve various aspects of Fedora. On a broad level, the topics discussed included making Rawhide more usable, the need for more automated tests, etc. The previous mailing list discussion thread is here.
  • What's new with SELinux, by Dan Walsh — Off the top of my memory, I only recall a couple of things from this talk: Dan discussed new confined domains, new permissive domains, the sepolicy tool chain, and what's upcoming (he mentioned a newer coreutils, with upgraded cp, mv, install, mkdir commands which provide a -Z flag). Some context is here.
  • Secure Linux Containers, by Dan Walsh: This was one of my favourite sessions. I was interested to learn a bit more about containers — OpenStack heavily uses network namespaces to provide networking, and I thought this session would give some high-level context; I wasn't disappointed. Dan discussed several topics: application sandboxes, Linux Containers, the different types of Linux namespaces (Mount, UTS, IPC, Network, PID, User), and Cgroups. He then went on to elaborate on different types of containers (and their use cases): Generic Application Container, Systemd Application Container, Chroot Application Container, libvirt-lxc, virt-sandbox, virt-sandbox-service, systemd-nspawn.
  • PKI made easy: Ade Lee gave an overview of PKI, Dogtag, and its integration aspects with FreeIPA. I worked with Ade on this project and associated Red Hat products for about three years. It was nice to meet him in person for the first time after all these years.
  • Fedora QA Meeting: On Monday (12-AUG), I participated in it with Adam Williamson and the rest of the Fedora QA team. Video is here. Major topics:
    • ARM release criteria / test matrix adjustments
    • Visible Cloud release criteria / test matrix adjustments.

Among other sessions, I also participated in the “Hack the Future” (of Fedora) with Matthew Miller. I also enjoyed the conference recap discussion with FESCo (Fedora Engineering Steering Committee).

OpenStack Test Event

On day two of FLOCK, I conducted an OpenStack test event. Earlier, I blogged about it here. This session wasn't recorded, as it was a hands-on test event. We had about 20 participants (the capacity of the room was around 25).

Some notes:

  • Russell Bryant, Nova PTL, was in the room; not feeling qualified enough myself, I made him give a quick 5-minute introduction to OpenStack :-). Later, Jeff Peeler from the OpenStack Heat project also gave a brief introduction to Heat and what it does. RDO community manager Rich Bowen was also present and participated in the event.
  • Notes from the test event are here.
  • Russell Bryant (thank you!) kindly offered temporary access to virtual machines (from the Rackspace cloud) for participants who didn't have enough hardware on their laptops to quickly set up and test OpenStack. I know of at least a couple of people who successfully set up OpenStack using these temporary VM instances.
  • A couple of people hit the bogus “install successfully finished” bug. Clean-up and re-run wasn’t really straightforward in this case.
  • Another participant hit an issue where packstack adds 'libvirt_type=kvm' in nova.conf /despite/ the machine not having hardware virtualization extensions. It should ideally add 'libvirt_type=qemu' if hardware extensions aren't found (this should be double-checked). And at least one person hit MySQL login credential errors (which I hit myself on one of my test runs) with an allinone packstack run.

Overall: given the time frame of 2 hours and the complexity involved in setting up OpenStack, we had decent participation. I know of at least 5-7 people who had it configured and running. Thanks to Russell, Jeff Peeler, Sandro Mathys, and Rich Bowen for helping and assisting participants during the test event.


These are arbitrary discussions, notes to self, todos, and amusing (to me) snippets from hallway conversations. Let's see what I can recall.

  • I ran into Luke Macken in the hotel lobby one of the evenings; we briefly talked about virtualization, and he mentioned he tried PCI passthrough of a sound card with KVM/QEMU and couldn't get it working. I said I'll try it and get back to him (note to self: add this as the 198th item on the TODO list).
  • From discussions with Matthew Miller: we need to switch to Oz from Appliance Creator to generate Fedora Cloud images.
  • Try out Ansible’s OpenStack deployer tool.
  • Had an interesting hallway chat with Bill Nottingham, Miloslav Trmac, in the relaxed environment of the Charleston Aquarium (do /not/ miss this mesmerizing place if you’re in Charleston!). Topics: Various Cloud technologies, mailing list etiquette (the much discussed recent LKML epic thread about conflicts, strong language to convey your points, etc.)
  • Books: At the day-1 evening dinner, Paul Frields mentioned 'Forever War' as one of his favorites and highly recommended it. The next day, during the evening event at the Mynt bar, Greg DeKoenigsberg said, "anything & everything by Jim Collins". The same evening, while still discussing books, Tom Callaway said "no, that's not the right book" (or something along those lines) when I mentioned 'Elements of Style'. I don't know what his reasoning was; I liked the book anyway. :-)
  • I learnt interesting details about life in Czech Republic from conversations with Jan Zeleny.
  • A lot of little/long conversations with Adam Williamson (Fedora QA Czar), Robyn Bergeron, Ruth Suehle, Christoph Wickert, Rahul Sundaram, Cole Robinson, Toshio Kuratomi, and all the others I've missed naming here.
  • Thanks, Toshio, for the delicious peaches! (He carried that large box of peaches in his cabin luggage.) Also thank you for the nice conversation last Tuesday, and for taking us to the breakfast place Kitchen, on King's Street.
  • Also, I tried Jon Masters' Google Glass. But it wasn't quite intuitive, as I already have prescription glasses.

Leave a comment

Filed under Uncategorized