Capturing x86 CPU diagnostics

Some time ago I learned, from Paolo Bonzini (upstream KVM maintainer), about a little debugging utility, x86info (written by Dave Jones), which captures detailed CPU diagnostics — TLB, cache sizes, CPU feature flags, model-specific registers, etc. Take a look at its man page for specifics.

Install:

$ yum install x86info -y

Run it and capture the output in a file:

$ x86info -a 2>&1 | tee stdout-x86info.txt

As part of debugging KVM-based nested virtualization issues, I captured x86info output for L0 (bare metal, Intel Haswell), L1 (the guest hypervisor), and L2 (the nested guest, running on L1).
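
To compare all three levels side by side, a loop like the below works (a minimal sketch; the hostnames l0, l1 and l2 are hypothetical and assume SSH access to each level):

$ for host in l0 l1 l2; do
      ssh root@"$host" x86info -a > "x86info-$host.txt" 2>&1
  done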


virt-builder, to trivially create various Linux distribution guest images

I frequently use virt-builder (part of the libguestfs-tools package) as part of my workflow.

Rich has documented it extensively; still, I felt its sheer simplicity is worth pointing out again.

For instance, if you need to create a Fedora 20 guest, 100G in size and in qcow2 format, it’s as trivial as this (no root login needed):

$ virt-builder fedora-20 --format qcow2 --size 100G
[   1.0] Downloading: http://libguestfs.org/download/builder/fedora-20.xz
#######################################################################  100.0%
[ 131.0] Planning how to build this image
[ 131.0] Uncompressing
[ 139.0] Resizing (using virt-resize) to expand the disk to 100.0G
[ 220.0] Opening the new disk
[ 225.0] Setting a random seed
[ 225.0] Setting random root password [did you mean to use --root-password?]
Setting random password of root to N4KkQjZTgdfjjqJJ
[ 225.0] Finishing off
Output: fedora-20.qcow2
Output size: 100.0G
Output format: qcow2
Total usable space: 97.7G
      Free space: 97.0G (99%)

Then, import the just-created image:

$ virt-install --name guest-hyp --ram 8192 --vcpus=4 \
  --disk path=/home/test/vmimages/fedora-20.qcow2,format=qcow2,cache=none \
  --import

It provides a serial console for login.
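
If you disconnect, you can reattach to the console any time (assuming the guest runs on the local libvirt connection):

$ virsh console guest-hyp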

You can also create images for several other distributions — Debian, etc.
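
virt-builder can also customize the image at build time. A sketch, with options as documented in its man page (the hostname, password, and package names below are just examples):

$ virt-builder fedora-20 --format qcow2 --size 100G \
    --hostname f20-test.example.com \
    --root-password password:s3kr3t \
    --install vim,tcpdump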


Script to create Neutron tenant networks

In my two-node OpenStack setup (RDO on Fedora 20), I often have to create multiple Neutron tenant networks (here you can read more on what a tenant network is) for various testing purposes.

To alleviate this manual process, here is a trivial script that creates a new Neutron tenant network once you provide a few positional parameters. It assumes there’s a working OpenStack setup with Neutron configured. I tested this on Neutron + OVS + GRE, but it should work with any other Neutron plugin, as tenant networks are a Neutron concept (and not specific to plugins).

Usage:

$ ./create-new-tenant-network.sh                \
                    TENANTNAME USERNAME         \
                    SUBNETSPACE ROUTERNAME      \
                    PRIVNETNAME PRIVSUBNETNAME

To create a new tenant network with 14.0.0.0/24 subnet:

$ ./create-new-tenant-network.sh \
  demoten1 tuser1                \
  14.0.0.0 trouter1              \
  priv-net1 priv-subnet1

The script does the following, in order:

  1. Creates a Keystone tenant called demoten1.
  2. Then, creates a Keystone user called tuser1 and associates it with the tenant demoten1.
  3. Creates a Keystone RC file for the user (tuser1) and sources it.
  4. Creates a new private network called priv-net1.
  5. Creates a new private subnet called priv-subnet1 on priv-net1.
  6. Creates a router called trouter1.
  7. Associates the router (trouter1 in this case) to an existing external network (the script assumes it is named ext) by setting it as the router’s gateway.
  8. Associates the private network interface (priv-net1) to the router (trouter1).
  9. Adds Neutron security group rules for this test tenant (demoten1) for ICMP and SSH.

To test that it’s all working, try booting a new Nova guest in the tenant network; it should acquire an IP address from the 14.0.0.0/24 subnet.
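
For example (the image and flavor names below are illustrative):

$ source keystonerc_tuser1
$ nova boot --image fedora-20 --flavor m1.small testvm1
$ nova list     # once ACTIVE, the guest should have a 14.0.0.x address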

Posting the relevant part of the script:

[. . .]
# Source the admin credentials
source keystonerc_admin


# Positional parameters
tenantname=$1
username=$2
subnetspace=$3
routername=$4
privnetname=$5
privsubnetname=$6


# Create a tenant, user and associate a role/tenant to it.
keystone tenant-create       \
         --name $tenantname
 
keystone user-create         \
         --name $username    \
         --pass fedora

keystone user-role-add       \
         --user $username    \
         --role user         \
         --tenant $tenantname

# Create an RC file for this user and source the credentials
cat >> keystonerc_$username <<EOF
export OS_USERNAME=$username
export OS_TENANT_NAME=$tenantname
export OS_PASSWORD=fedora
export OS_AUTH_URL=http://localhost:5000/v2.0/
export PS1='[\u@\h \W(keystone_$username)]\$ '
EOF


# Source this user credentials
source keystonerc_$username


# Create new private network, subnet for this user tenant
neutron net-create $privnetname

neutron subnet-create $privnetname \
        $subnetspace/24            \
        --name $privsubnetname


# Create a router
neutron router-create $routername


# Associate the router to the external network 
# by setting its gateway.
# NOTE: This assumes, the external network name is 'ext'
EXT_NET=$(neutron net-list     \
| grep ext | awk '{print $2;}')

PRIV_NET=$(neutron subnet-list \
| grep $privsubnetname | awk '{print $2;}')

ROUTER_ID=$(neutron router-list \
| grep $routername | awk '{print $2;}')

neutron router-gateway-set \
        $ROUTER_ID $EXT_NET

neutron router-interface-add \
        $ROUTER_ID $PRIV_NET


# Add Neutron security groups for this test tenant
neutron security-group-rule-create   \
        --protocol icmp              \
        --direction ingress          \
        --remote-ip-prefix 0.0.0.0/0 \
        default

neutron security-group-rule-create   \
        --protocol tcp               \
        --port-range-min 22          \
        --port-range-max 22          \
        --direction ingress          \
        --remote-ip-prefix 0.0.0.0/0 \
        default

NOTE: As the shell script runs in a sub-process of the parent shell, the sourcing of the newly created user’s Keystone credentials won’t persist in your interactive shell. (You can see it happen in the script’s stdout in debug mode.)
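
For instance:

$ bash -x ./create-new-tenant-network.sh \
    demoten1 tuser1 14.0.0.0 trouter1 priv-net1 priv-subnet1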

If it’s helpful for someone, here are my Neutron configurations & iptables rules for a two-node setup with Neutron + OVS + GRE (see the next post below).


OpenStack in Action, Paris (5DEC2013) – quick recap

I decided at the last moment (thanks to Dave Neary for notifying me) to make a quick visit to Paris for this one-day event, OpenStack in Action 4.

The conference was very well organized. The day’s agenda was split between high-level keynotes for the first half of the morning, and technical and business sessions for the rest of the day.

Of the morning keynote sessions, I attended two fully. First, Red Hat’s Paul Cormier’s keynote on Open Clouds; among the non-technical keynotes, I felt this was the best presented, both content-wise and visually. Second, Thierry Carrez’s (OpenStack Release Manager) talk, “Havana to Icehouse”. Thierry gave an excellent overview of what was accomplished during the Havana release cycle and discussed the work in progress for the upcoming Icehouse release.

Among the technical sessions, the one I paid closest attention to was Mark McClain’s (Neutron PTL) “From Segments to Services, a Dive into OpenStack Networking”. Mark started with a high-level overview of Neutron networking, followed by a discussion of its various aspects: the architecture of the Neutron API; the flow of a Neutron API request (originating from the Neutron CLI/Horizon web UI); some of the common features across Neutron plugins — support for overlapping IPs, DHCP, floating IPs; Neutron security groups; metadata; some advanced services (load balancing, firewall, VPN); and provider networks. An interesting thing I learnt was the Neutron Modular Layer 2 (ML2) plugin, which would combine the Open vSwitch and Linux Bridge plugins into a single plugin.

All the talks were recorded and should be on the web soon.


Neutron configs for a two-node OpenStack Havana setup (on Fedora-20)

I managed to prepare a two-node OpenStack Havana setup, hand-configured (URL to my notes below). Here are some Neutron configurations that worked for me.

Setup details:

  • Two Fedora 20 minimal (@core) virtual machines to run the Controller & Compute nodes.
  • Services on the Controller node: Keystone, Cinder, Glance, Neutron, Nova. Neutron networking is set up with the OpenvSwitch plugin, network namespaces, and GRE tunneling.
  • Services on Compute node: Nova (openstack-nova-compute service), Neutron (neutron-openvswitch-agent), libvirtd, OpenvSwitch.
  • Both nodes are manually configured. My notes are here.

Configurations

OpenvSwitch plugin configuration — /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini — on Controller node:

$ cat plugin.ini | grep -v ^$ | grep -v ^#
[ovs]
[agent]
[securitygroup]
[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 192.168.122.163
[DATABASE]
sql_connection = mysql://neutron:fedora@vm01-controller/ovs_neutron
[SECURITYGROUP]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Neutron configuration — /etc/neutron/neutron.conf:

$ cat neutron.conf | grep -v ^$ | grep -v ^#
[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
rpc_backend = neutron.openstack.common.rpc.impl_qpid
qpid_hostname = localhost
auth_strategy = keystone
ovs_use_veth = True
allow_overlapping_ips = True
qpid_port = 5672
[quotas]
quota_network = 20
quota_subnet = 20
[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
[keystone_authtoken]
auth_host = 192.168.122.163
admin_tenant_name = services
admin_user = neutron
admin_password = fedora
[database]
[service_providers]

Neutron L3 agent configuration — /etc/neutron/l3_agent.ini:

$ cat l3_agent.ini | grep -v ^$ | grep -v ^#
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
handle_internal_only_routers = TRUE
ovs_use_veth = True
use_namespaces = True
metadata_ip = 192.168.122.163
metadata_port = 8700

Neutron metadata agent — /etc/neutron/metadata_agent.ini:

$ cat metadata_agent.ini | grep -v ^$ | grep -v ^#
[DEFAULT]
auth_url = http://192.168.122.163:35357/v2.0/
auth_region = regionOne
admin_tenant_name = services
admin_user = neutron
admin_password = fedora
nova_metadata_ip = 192.168.122.163
nova_metadata_port = 8700
metadata_proxy_shared_secret = fedora
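
To check the metadata service end to end, query the well-known metadata address from inside a running guest:

$ curl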

iptables rules on Controller node:

$ cat /etc/sysconfig/iptables
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m multiport --dports 3260 -m comment --comment "001 cinder incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 80 -m comment --comment "001 horizon incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 9292 -m comment --comment "001 glance incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 5000,35357 -m comment --comment "001 keystone incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 3306 -m comment --comment "001 mariadb incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 6080 -m comment --comment "001 novncproxy incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 8770:8780 -m comment --comment "001 novaapi incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 9696 -m comment --comment "001 neutron incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 5672 -m comment --comment "001 qpid incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 8700 -m comment --comment "001 metadata incoming" -j ACCEPT 
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 5900:5999 -j ACCEPT
-A INPUT -p gre -j ACCEPT
-A OUTPUT -p gre -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
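
After editing the file, reload the rules (this setup uses the iptables service rather than firewalld; note that the GRE ACCEPT rules must come before the blanket REJECT, as ordered above):

$ systemctl restart iptables.service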

iptables rules on Compute node:

$ cat /etc/sysconfig/iptables
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 5900:5999 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

OpenvSwitch database contents:

$ ovs-vsctl show
6f5d0e33-7013-4816-bc97-29af9abe8309
    Bridge br-int
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port "tap63ea2815-b5"
            tag: 1
            Interface "tap63ea2815-b5"
    Bridge br-ex
        Port "eth0"
            Interface "eth0"
        Port "tape7110dba-a9"
            Interface "tape7110dba-a9"
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, local_ip="192.168.122.163", out_key=flow, remote_ip="192.168.122.100"}
    ovs_version: "2.0.0"
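
With the configuration in place and services running on both nodes, the agents should have registered with neutron-server. A quick sanity check (healthy agents show a smiley in the alive column):

$ neutron agent-list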

NOTE: I SCPed the Neutron configuration neutron.conf and the OpenvSwitch plugin configuration plugin.ini from the Controller to the Compute node (don’t forget to replace the local_ip attribute appropriately — I made that mistake).
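
For instance, the one value that must differ in the Compute node’s plugin.ini is local_ip (192.168.122.100 here, per the GRE tunnel options shown above):

local_ip = 192.168.122.100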

A couple of non-deterministic issues I’m still investigating on a new setup that uses a non-default libvirt network as the external network (on my current setup I used libvirt’s default subnet, 192.168.x.x; Lars pointed out that could probably be the cause of some of the routing issues):

  • Sporadic loss of networking for Nova guests. This got resolved (at least partially) whenever I invoked the guest’s VNC console (via SSH tunneling) and did some basic diagnostics; networking then came up just fine in the guests again (GRE tunnels going stale?). tcpdump analysis on the various network devices (tunnels/bridges/tap devices) on both Controller & Compute nodes is in progress.
  • Nova guests failing to acquire DHCP leases. (I can clearly observe this when I explicitly do an ifdown eth0 && ifup eth0 from the guest’s VNC console.) The Neutron DHCP agent seems to be flaky here.

TIP: On Fedora, the openstack-utils package (from version openstack-utils-2013.2-2.fc21.noarch) includes a neat utility called openstack-service, which lets you trivially control OpenStack services. This makes life much easier. Thanks, Lars!
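
For example (subcommand syntax from memory; see the utility’s help output):

$ openstack-service status neutron
$ openstack-service restart neutron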


KVMForum, LinuxCon/CloudOpen Eu 2013

KVMForum, CloudOpen, LinuxCon and several other co-located events are starting next week in Edinburgh.

Here’s the schedule information.

For those not able to attend KVMForum in person, it’s currently planned to broadcast the sessions using Google Hangouts on Air. And here’s the G+ page for KVMForum 2013.

Edit: Notes from my presentation (http://sched.co/14wjVpS) on Nested Virt (KVM on KVM) — http://kashyapc.fedorapeople.org/virt/lc-2013/nested-virt-kvm-on-kvm-CloudOpen-Eu-2013-Kashyap-Chamarthy.pdf


Fedora 20 Virtualization Test Day — 8OCT2013

Cole Robinson announced the Virtualization test day for Fedora 20.

For convenience, here’s what’s needed to get started.

And, as usual, tests can be performed any day from now on, and the wiki can be updated with your results/observations. It’s just that on the test day, more virtualization developers will be available on IRC (#fedora-test-day on Freenode) to answer questions.
