Neutron configs for a two-node OpenStack Havana setup (on Fedora-20)

I managed to prepare a two-node OpenStack Havana setup, hand-configured (URL to notes below). Here are some Neutron configurations that worked for me.

Setup details:

  • Two Fedora 20 minimal (@core) virtual machines to run the Controller & Compute nodes.
  • Services on Controller node: Keystone, Cinder, Glance, Neutron, Nova. Neutron networking is set up with the Open vSwitch plugin, network namespaces, and GRE tunneling.
  • Services on Compute node: Nova (openstack-nova-compute service), Neutron (neutron-openvswitch-agent), libvirtd, Open vSwitch.
  • Both nodes are manually configured. Notes are here.

Configurations

Open vSwitch plugin configuration — /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini — on the Controller node:

$ cat plugin.ini | grep -v ^$ | grep -v ^#
[ovs]
[agent]
[securitygroup]
[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 192.168.122.163
[DATABASE]
sql_connection = mysql://neutron:fedora@vm01-controller/ovs_neutron
[SECURITYGROUP]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Neutron configuration — /etc/neutron/neutron.conf:

$ cat neutron.conf | grep -v ^$ | grep -v ^#
[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
rpc_backend = neutron.openstack.common.rpc.impl_qpid
qpid_hostname = localhost
auth_strategy = keystone
ovs_use_veth = True
allow_overlapping_ips = True
qpid_port = 5672
[quotas]
quota_network = 20
quota_subnet = 20
[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
[keystone_authtoken]
auth_host = 192.168.122.163
admin_tenant_name = services
admin_user = neutron
admin_password = fedora
[database]
[service_providers]

Neutron L3 agent configuration — /etc/neutron/l3_agent.ini:

$ cat l3_agent.ini | grep -v ^$ | grep -v ^#
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
handle_internal_only_routers = TRUE
ovs_use_veth = True
use_namespaces = True
metadata_ip = 192.168.122.163
metadata_port = 8700

Neutron metadata agent — /etc/neutron/metadata_agent.ini:

$ cat metadata_agent.ini | grep -v ^$ | grep -v ^#
[DEFAULT]
auth_url = http://192.168.122.163:35357/v2.0/
auth_region = regionOne
admin_tenant_name = services
admin_user = neutron
admin_password = fedora
nova_metadata_ip = 192.168.122.163
nova_metadata_port = 8700
metadata_proxy_shared_secret = fedora
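
For the metadata proxy to work end-to-end, Nova on the Controller needs matching settings as well. A minimal sketch of the relevant nova.conf lines, assuming Havana's option names (verify against your installed version):

$ grep -i metadata /etc/nova/nova.conf | grep -v ^#
service_neutron_metadata_proxy = True
neutron_metadata_proxy_shared_secret = fedora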

iptables rules on Controller node:

$ cat /etc/sysconfig/iptables
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m multiport --dports 3260 -m comment --comment "001 cinder incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 80 -m comment --comment "001 horizon incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 9292 -m comment --comment "001 glance incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 5000,35357 -m comment --comment "001 keystone incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 3306 -m comment --comment "001 mariadb incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 6080 -m comment --comment "001 novncproxy incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 8770:8780 -m comment --comment "001 novaapi incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 9696 -m comment --comment "001 neutron incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 5672 -m comment --comment "001 qpid incoming" -j ACCEPT 
-A INPUT -p tcp -m multiport --dports 8700 -m comment --comment "001 metadata incoming" -j ACCEPT 
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 5900:5999 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A INPUT -p gre -j ACCEPT 
-A OUTPUT -p gre -j ACCEPT
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

iptables rules on Compute node:

$ cat /etc/sysconfig/iptables
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 5900:5999 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
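
To load either rule set, the iptables service can be restarted so that it re-reads /etc/sysconfig/iptables; this assumes the iptables-services package is installed and firewalld (the Fedora 20 default) is disabled. A quick way to confirm the rules are live:

$ systemctl restart iptables.service
$ iptables -S INPUT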

Open vSwitch database contents:

$ ovs-vsctl show
6f5d0e33-7013-4816-bc97-29af9abe8309
    Bridge br-int
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port "tap63ea2815-b5"
            tag: 1
            Interface "tap63ea2815-b5"
    Bridge br-ex
        Port "eth0"
            Interface "eth0"
        Port "tape7110dba-a9"
            Interface "tape7110dba-a9"
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, local_ip="192.168.122.163", out_key=flow, remote_ip="192.168.122.100"}
    ovs_version: "2.0.0"
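
To cross-check that the agents on both nodes registered themselves and that the tunnel bridge has the expected GRE port, something along these lines (run on the Controller, with admin credentials sourced) should do; output will obviously differ per setup:

$ neutron agent-list
$ ovs-vsctl list-ports br-tun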

NOTE: I SCPed the Neutron configuration neutron.conf and the Open vSwitch plugin configuration plugin.ini from the Controller to the Compute node (don't forget to replace the local_ip attribute appropriately — I made that mistake).
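
Roughly, that copy looks like the below; the Compute node's hostname (vm02-compute) is hypothetical, and 192.168.122.100 is the Compute node's tunnel IP as seen in the GRE port output above:

$ scp /etc/neutron/neutron.conf root@vm02-compute:/etc/neutron/
$ scp /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini \
      root@vm02-compute:/etc/neutron/plugins/openvswitch/
$ ssh root@vm02-compute \
      "sed -i 's/^local_ip = .*/local_ip = 192.168.122.100/' /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini"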

A couple of non-deterministic issues I'm still investigating on a new setup, with a non-default libvirt network as the external network (on my current setup I used libvirt's default subnet, 192.168.x.x; Lars pointed out that could probably be the cause of some of the routing issues):

  • Sporadic loss of networking for Nova guests. This got resolved (at least partially) when I invoked VNC for the guest (via SSH tunneling) and did some basic diagnostics; networking then came up just fine in the guests again (do the GRE tunnels go stale?). tcpdump analysis on various network devices (tunnels/bridges/tap devices) on both the Controller and Compute nodes is in progress; a sketch of those captures follows this list.
  • Nova guests fail to acquire DHCP leases (I can clearly observe this when I explicitly do an ifdown eth0 && ifup eth0 from the guest's VNC console). The Neutron DHCP agent seems to be flaky here.
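
For reference, these are the kind of captures I'm running; the network UUID is a placeholder, and GRE is IP protocol 47:

$ tcpdump -e -n -i eth0 ip proto 47                 # GRE traffic between the nodes
$ ip netns list                                     # find the qdhcp-<network-uuid> namespace
$ ip netns exec qdhcp-<network-uuid> \
      tcpdump -n -i any port 67 or port 68          # DHCP requests reaching the dnsmasq side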

TIP: On Fedora, the openstack-utils package (from version openstack-utils-2013.2-2.fc21.noarch) includes a neat utility called openstack-service, which lets you trivially control OpenStack services. This makes life much easier. Thanks to Lars!
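
A couple of hedged examples (the exact sub-commands may vary slightly between openstack-utils versions):

$ openstack-service status            # status of all OpenStack services on this node
$ openstack-service restart neutron   # restart only the neutron-* services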


KVMForum, LinuxCon/CloudOpen EU 2013

KVMForum, CloudOpen, LinuxCon and several other co-located events are starting next week in Edinburgh.

Here’s the schedule information.

For those not able to attend KVMForum in person, there are plans to broadcast the sessions using Google Hangouts on Air. And here's the G+ page for KVMForum 2013.

Edit: Notes from my presentation (http://sched.co/14wjVpS) on Nested Virt (KVM on KVM) — http://kashyapc.fedorapeople.org/virt/lc-2013/nested-virt-kvm-on-kvm-CloudOpen-Eu-2013-Kashyap-Chamarthy.pdf


Fedora 20 Virtualization Test Day — 8OCT2013

Cole Robinson announced Virtualization test day for Fedora 20.

For convenience, here’s what’s needed to get started.

And, as usual, tests can be performed any day from now on, and the wiki can be updated with your results/observations. On the test day itself, more virtualization developers will be available on IRC (#fedora-test-day on Freenode) to answer questions.


FLOCK 2013, Retrospective

Heya,

FLOCK just concluded last week. Given the very short time-frame, the conference was very well organized! (I know what pains it takes, from first-hand experience volunteering to organize FUDCon Pune a couple of years ago.) While not undermining others' efforts, I couldn't agree more with Spot about Ruth: "To put it bluntly, anything at Flock that you liked was probably her handiwork." Her super-efficiency shined through everything at FLOCK.

I attempted to write while in the middle of a couple of sessions, but I just couldn't context switch. (For instance, I have a partial draft that starts off with "I'm currently in the middle of Miloslav Trmač's discussion about Fedora Revamp…")

Here’s my (verbose) summary of how I spent my time at FLOCK.

Talks that I have attended

  • Matthew Miller's discussion of "cloud", and should Fedora care?: This was a very high-level overview of the topic. For me, the main takeaway was the Fedora Cloud SIG's near-term goals — more visibility, better documentation.
  • Crystal Ball talk/discussion by Stephen Gallagher, which discussed where Fedora is going over the next five years. All the discussion and notes are here.
  • Kernel Bug Triage, Live by Dave Jones. Dave walked us through the process of triaging a bug, and also introduced some scripts he wrote to manage Bugzilla workflow and related triaging aspects.
  • Fedora Revamp by Miloslav Trmač — This was more of a discussion about how to improve various aspects of Fedora. On a broad level, the topics discussed included making Rawhide more usable, the need for more automated tests, etc. The previous mailing list discussion thread is here.
  • What's new with SELinux, by Dan Walsh — Off the top of my memory, I only recall a couple of things from this talk, where Dan discussed: new confined domains, new permissive domains, the sepolicy tool chain, and what's upcoming (he mentioned a newer coreutils, with upgraded cp, mv, install, mkdir commands which provide a -Z flag). Some context is here.
  • Secure Linux Containers, by Dan Walsh: This was one of my favourite sessions. I was interested to learn a bit more about containers; OpenStack heavily uses network namespaces to provide networking, and I thought this session would give some high-level context. I wasn't disappointed. Dan discussed several topics: application sandboxes, Linux containers, the different types of Linux namespaces (mount, UTS, IPC, network, PID, user), and cgroups. He then went on to elaborate on different types of containers (and their use cases): generic application containers, systemd application containers, chroot application containers, libvirt-lxc, virt-sandbox, virt-sandbox-service, and systemd-nspawn.
  • PKI made easy: Ade Lee gave an overview of PKI, Dogtag, and its integration aspects with FreeIPA. I worked with Ade on this project and associated Red Hat products for about three years. It was nice to meet him in person for the first time after all these years.
  • Fedora QA Meeting: On Monday (12-AUG), I participated in this meeting with Adam Williamson and the rest of the Fedora QA team. Video is here. Major topics:
    • ARM release criteria / test matrix adjustments
    • Visible Cloud release criteria / test matrix adjustments.

Among other sessions, I participated in the "Hack the Future" (of Fedora) session with Matthew Miller, and enjoyed the conference recap discussion with FESCo (the Fedora Engineering Steering Committee).

OpenStack Test Event

On day two of FLOCK, I conducted an OpenStack test event; I blogged about it earlier here. This session wasn't recorded, as it was a hands-on test event. We had about 20 participants (the capacity of the room was around 25).

Some notes:

  • Russell Bryant, Nova PTL, was in the room; not feeling qualified enough myself, I made him give a quick 5-minute introduction to OpenStack :-). Later, Jeff Peeler from the OpenStack Heat project also gave a brief introduction about Heat and what it does. RDO community manager Rich Bowen was also present and participated in the event.
  • Notes from the test event are here.
  • Russell Bryant (thank you!) kindly offered to provide temporary access to virtual machines (from the Rackspace cloud) for participants who didn't have enough hardware on their laptops to quickly set up and test OpenStack. I know of at least a couple of people who successfully set it up using these temporary VM instances.
  • A couple of people hit the bogus "install successfully finished" bug. Clean-up and re-running wasn't really straightforward in this case.
  • Another participant hit an issue where packstack adds 'libvirt_type=kvm' in nova.conf /despite/ the machine not having hardware virtualization extensions. It should ideally add 'libvirt_type=qemu' if hardware extensions aren't found (this should be double-checked; a hedged sketch of that check and workaround follows this list). And at least one person hit MySQL login credential errors (which I hit myself on one of my test runs) with an all-in-one packstack run.
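
For reference, a hedged sketch of how one might check for the extensions and work around the issue on an affected machine (openstack-config is part of the openstack-utils package):

$ egrep -c '(vmx|svm)' /proc/cpuinfo     # 0 means no hardware virt extensions
$ openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_type qemu
$ openstack-service restart nova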

Overall: given the time frame of two hours, and the complexity involved in setting up OpenStack, we had decent participation. At least 5-7 people that I know of had it configured and running. Thanks to Russell Bryant, Jeff Peeler, Sandro Mathys, and Rich Bowen for helping and assisting participants during the test event.

TODOs/Notes/Hallway

These are arbitrary discussions, notes to self, todos, and amusing (to me) snippets from hallway conversations. Let's see what I can recall.

  • I ran into Luke Macken in the hotel lobby one of the evenings; we briefly talked about virtualization, and he mentioned he had tried PCI passthrough of a sound card with KVM/QEMU and couldn't get it working. I said I'd try it and get back to him (note to self: add this as the 198th item on the TODO list).
  • From discussions with Matthew Miller: we need to switch to Oz from Appliance Creator to generate Fedora Cloud images.
  • Try out Ansible’s OpenStack deployer tool.
  • Had an interesting hallway chat with Bill Nottingham and Miloslav Trmač in the relaxed environment of the Charleston Aquarium (do /not/ miss this mesmerizing place if you're in Charleston!). Topics: various cloud technologies, and mailing list etiquette (the much-discussed recent LKML epic thread about conflicts, strong language to convey your points, etc.).
  • Books: At the day-1 evening dinner, Paul Frields mentioned 'Forever War' as one of his favorites and highly recommended it. The next day, during the evening event at the Mynt bar, Greg DeKoenigsberg said "anything & everything by Jim Collins". The same evening, while still discussing books, Tom Callaway said "no, that's not the right book" (or something along those lines) when I mentioned 'Elements of Style'. I don't know what his reasoning was; I liked the book anyway. :-)
  • I learnt interesting details about life in Czech Republic from conversations with Jan Zeleny.
  • A lot of little and long conversations with Adam Williamson (Fedora QA Czar), Robyn Bergeron, Ruth Suehle, Christopher Wickert, Rahul Sundaram, Cole Robinson, Toshio Kuratomi, and all the others I've missed naming here.
  • Thanks, Toshio, for the delicious peaches! (He carried that large box of peaches in his cabin luggage.) Also, thank you for the nice conversation last Tuesday, and for taking us to the breakfast place Kitchen, on King Street.
  • Also, I tried Google Glass from Jon Masters. But it wasn’t quite intuitive, as I have prescription glasses already.


OpenStack test event at Flock — Fedora Contributor’s conference (AUG 9-12)

Firstly, this post should have come from Matthias Runge, a long-time Fedora contributor and OpenStack Horizon developer. Earlier this May, Matthias proposed a FLOCK hack-fest session for OpenStack, to test and fix the latest packages on Fedora, and it was accepted. Unfortunately, Matthias cannot make it to Charleston due to personal reasons (duh, he should have been there!). He trusted that I could handle swapping places with him (I work with him as part of the Cloud Engineering team at Red Hat). Thanks, Matthias!

Secondly, it's not really news that FLOCK (the Fedora contributors' conference), the first edition of the revamped (and now erstwhile) FUDCon, will be taking place in about two weeks (Aug 9-12) in Charleston, South Carolina.

If you care about open source Infrastructure as a Service (OK, let's say cloud) software, and are interested in contributing, learning, or deploying it, you're more than welcome to participate in this session (needless to say). There will also be a couple of core OpenStack developers hanging around during the conference!

Some practical information for the OpenStack test event:

  • Abstract: OpenStack FLOCK test event abstract is here.
  • Prerequisites: This is meant to be a hands-on session where we try to set up and test OpenStack. Given the nature of OpenStack, a laptop with 4G (or more) of memory and at least 50G of free disk space will make life easier while setting it up (a minimal all-in-one sketch follows this list).
  • Current milestone packages: OpenStack Havana, milestone-2 packages are here.
  • Trunk packages: These are built from OpenStack upstream trunk on an hourly basis. As of this writing, Neutron (OpenStack networking) server packages are not yet available (coming soon; people are working hard on this!), so nova-network is the temporary recommendation. Further details are in the quickstart instructions.
  • Bug tracking: If there are OpenStack Fedora/EL packaging, installation, or related issues, file them in RH Bugzilla. If you're testing from upstream trunk, it's better to file them in the upstream issue tracker.
  • An etherpad instance with more verbose information is here. If you have comments/suggestions/notes, please add them there.
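
For the impatient, the all-in-one path boils down to something like the following, assuming the RDO/Havana repository from the links above is already enabled (packstack prompts for, or generates, the rest):

$ yum install -y openstack-packstack
$ packstack --allinone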

Finally, there are plenty of interesting sessions – check out the schedule!

PS: Damn it, Seth Vidal — I was dearly looking forward to your user build tools talk :( I just noticed Kyle McMartin is doing it on Seth's behalf.


Configuring Libvirt guests with an Open vSwitch bridge

In the context of OpenStack networking, I was trying to explore Open vSwitch. I felt it was better to go one step back and try it with a pure libvirt guest before trying it with OpenStack networking.

Why Open vSwitch compared to a regular Linux bridge?

  • In short (as Thomas Graf, kernel networking subsystem developer, put it) — Software Defined Networking (SDN).
  • Open vSwitch’s upstream documentation provides a more detailed explanation.

Here's a simple scenario: the machine under test has a single physical NIC obtaining its IP address from DHCP, and runs KVM guests managed via libvirt.

Install Open vSwitch

Install the Open vSwitch package (this is on Fedora 19):

$ yum install openvswitch -y

Enable the openvswitch systemd unit file, and start the daemon:

$ systemctl enable openvswitch.service
$ systemctl start openvswitch.service

Check the status of the Open vSwitch service to ensure it's 'Active':

$ systemctl status openvswitch.service

Configure Open vSwitch (OVS) bridge
Before you proceed, ensure you have physical access, or access via a serial console, to the machine, because associating a physical interface with an Open vSwitch bridge will result in lost connectivity.
The reasoning is explained here, under the 'Configuration problems' section.

Add an OVS bridge device:

$ ovs-vsctl add-br ovsbr0

Associate eth0 (or em1) with the OVS bridge device. (At this point, network connectivity will be lost.)

$ ovs-vsctl add-port ovsbr0 eth0

My host was obtaining its IP address from DHCP, so I first cleared it from the physical interface and associated it with the Open vSwitch bridge device (ovsbr0):

$ ifconfig eth0 0.0.0.0
$ ifconfig ovsbr0 10.xx.yyy.zzz

I killed the existing dhclient instance on 'eth0', and initiated it on ovsbr0:

$ dhclient ovsbr0 &
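
To make this survive a reboot, the openvswitch package's network-scripts integration (documented in its README.RHEL) can be used instead of the manual steps above. A hedged sketch, assuming the classic network service rather than NetworkManager:

$ cat /etc/sysconfig/network-scripts/ifcfg-ovsbr0
DEVICE=ovsbr0
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
OVSBOOTPROTO=dhcp
OVSDHCPINTERFACES=eth0
HOTPLUG=no

$ cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=ovsbr0
ONBOOT=yes
BOOTPROTO=none
HOTPLUG=no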

List the OVS database contents

 
$ ovs-vsctl show
    3dc7f3e3-5872-47c0-ba6f-1cb12065f4d0
        Bridge "ovsbr0"
            Port "eth0"
                Interface "eth0"
            Port "ovsbr0"
                Interface "ovsbr0"
                    type: internal
        ovs_version: "1.10.0"

Update libvirt guest’s bridge source

I have an existing KVM guest, managed by libvirt, with its default network source associated with libvirt's default bridge (virbr0). Let's modify its network source to use the Open vSwitch bridge.

Edit the libvirt’s guest XML

$ virsh edit f18vm

The interface element should look as below (take note of the source bridge and virtualport type attributes):

[...]
    <interface type='bridge'>
      <mac address='52:54:00:fb:00:01'/>
      <source bridge='ovsbr0'/>
      <virtualport type='openvswitch'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
[...]

Once the guest XML is edited and saved, dump its contents to stdout; you'll notice an additional interfaceid attribute added automatically:

    $ virsh dumpxml f18vm | grep bridge -A8
       <interface type='bridge'>
         <mac address='52:54:00:fb:00:01'/>
         <source bridge='ovsbr0'/>
         <virtualport type='openvswitch'>
           <parameters interfaceid='74b6858e-8012-4caa-85c7-b64902a19605'/>
         </virtualport>
         <model type='virtio'/>
         <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
       </interface>
       <serial type='pty'>
         <target port='0'/>

Start the guest, and check if its IP address matches the host subnet:

$ virsh start f18vm --console
$ ifconfig eth0


Unattended F19 guest creation with Oz

Oz, which has been in development for a couple of years, lets you install various guest operating systems with minimal user input.

A simple wrapper script is here.

Usage:
$ yum install oz -y
$  ./oz-jeos.bash guest-name distro
      'distro': f19, f18
       Examples: oz-jeos.bash f19-jeos f19  # create f19
                 oz-jeos.bash f18-jeos f18  # create f18

If you prefer to invoke manually…

Create a TDL (Template Description Language) file:

$ cat << EOF > f19.tdl
<template>
  <name>$NAME</name>
  <os>
    <name>Fedora</name>
    <version>19</version>
    <arch>x86_64</arch>
    <install type='url'>
      <url>http://dl.fedoraproject.org/pub/fedora/linux/releases/19/Fedora/x86_64/os/</url>
    </install>
    <rootpw>fedora</rootpw>
  </os>
  <description>Fedora 19</description>
  <disk>
    <size>25</size>
  </disk>
</template>
EOF

NOTE: In the above TDL file, you can elide the disk element if you don't need 25G of disk. The default disk size is 10G.

Invoke Oz:

$ oz-install -d 4 f19.tdl 2>&1 \
  | tee /var/tmp/f19-oz-log.txt
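
At the end of a successful run, oz-install reports where it wrote the disk image and a libvirt XML for the guest. Assuming that, booting the result is just a matter of defining and starting it (the guest name below matches the wrapper-script example above):

$ virsh define <path-to-libvirt-xml-reported-by-oz-install>
$ virsh start f19-jeos && virsh console f19-jeos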


F18 -> F19 distro-sync with yum

I woke up this morning to find my F18 instance not giving me the GNOME login screen. It just hangs after systemd brings up wpa_supplicant.service, and throws this message:

systemd-readahead

Failed to read event: value too large for defined datatype

I fired up an F18 live USB to open a browser and check if there were any existing bugs. Indeed there was one; it's in ASSIGNED state, but I haven't delved deep into the issue.

I quickly tried a few things:

  • Start the systemd-readahead-replay service manually
     $ systemctl start systemd-readahead-replay.service
  • Try to invoke Xorg from a virtual terminal with the below command:
    $ Xorg :0 -background none -verbose \
      -seat seat0 -nolisten tcp vt1
  • Boot into runlevel 3, and type init 5
  • Remove quiet from the kernel command line, to get more verbose logs. Absolutely no errors in /var/log/Xorg.* or /var/log/messages.

All to no avail. I was drawing a blank and didn't have an alternative: I'm currently working remotely, and I only have this Lenovo X220 with me. So I decided to just upgrade to Schrödinger's Cat (i.e. Fedora 19). Why not? I've been working with Fedora 19 composes on more than 5 servers just fine for months (but they're all just minimal, @core-only virt hosts). I invoked the below commands and went for lunch…

$ yum update yum; yum clean all; \
  yum --releasever=19 distro-sync --nogpgcheck -y

…voila — after the download/update/cleanup/verify of 4770 packages, Schrödinger's Cat is ready!
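
A few hedged post-upgrade sanity checks I'd run afterwards, roughly along the lines of the Fedora upgrade-with-yum recommendations (package-cleanup comes from yum-utils):

$ rpm --rebuilddb
$ package-cleanup --orphans     # packages no longer present in any repository
$ package-cleanup --problems    # unmet dependencies left behind by the sync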


Fedora 19 Virtualization Test Day — 28MAY2013

Heya,

Rich Jones already mentioned it a couple of days ago. This is just another gentle reminder.

The test day wiki page has all the information. Also, Kamil Paral has kindly put together (thanks!) a test-day-specific Fedora 19 image.

It goes without saying that you can always update the results before/after the test day.

IRC – #fedora-test-day


Nested Virtualization — KVM, Intel, with VMCS Shadowing

[Previous installments on Nested Virtualization with KVM and Intel.]

This is part of some recent testing that I've been doing with upstream KVM (for 3.10.1). The threads linked here have initial tests benchmarking kernel compile times (with make defconfig, a default config file) in L2, and some minimal guestfish appliance start-up timings in L1.

Some details:

  • Setup information to test with VMCS (Virtual Machine Control Structure) Shadowing. In brief, VMCS Shadowing — a processor-specific feature — as described upstream, can reduce the overhead of nested virtualization by reducing the number of VMExits from L1 to L0. (A quick check of the relevant module parameters is sketched after this list.)
  • Simple scripts used to create L1 and L2.
  • Libvirt XMLs of L1, L2 guests, for reference.
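
A quick, hedged way to check (and, with no guests running, toggle) the relevant kvm_intel module parameters on the L0 host; enable_shadow_vmcs assumes a kernel that already carries the VMCS shadowing patches:

$ cat /sys/module/kvm_intel/parameters/nested
$ cat /sys/module/kvm_intel/parameters/enable_shadow_vmcs
$ modprobe -r kvm_intel && modprobe kvm_intel nested=1 enable_shadow_vmcs=1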

The gritty details of the reasons for VMExits are described in the Intel architecture manuals, Volume 3B, Appendix I.
