VIRSH

The Virsh provisioner is explicitly designed for setting up virtual environments. Such environments are used to emulate a production environment, like tripleo-undercloud instances, on one baremetal machine. It requires one prepared baremetal host (the designated hypervisor) that is initially reachable through SSH.

Requirements

First, Libvirt and KVM are installed and configured to provide a virtualized environment. Then, virtual machines are created for all requested nodes.
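
Before running the provisioner, it can be worth confirming that the designated hypervisor actually supports hardware virtualization. These are generic Linux checks, not part of infrared itself:

    # A non-zero count means the CPU exposes VT-x/AMD-V extensions
    grep -Ec '(vmx|svm)' /proc/cpuinfo

    # Present once the KVM kernel modules are loaded
    ls -l /dev/kvm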

Topology

The first thing you need to decide before you deploy your environment is the Topology. This refers to the number and type of VMs in your desired deployment environment. If we use OpenStack as an example, a topology may look something like:

  • 1 VM called undercloud
  • 1 VM called controller
  • 1 VM called compute
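
Expressed on the command line, this topology maps to the --topology-nodes option. The node names below are illustrative and must match node definition files known to the plugin:

    infrared virsh [...] --topology-nodes undercloud:1,controller:1,compute:1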

To control how each VM is created, we use a YAML file that describes the specification of each VM. For more information about the structure of the topology files and how to create your own, please refer to Topology.
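
As a rough sketch only (key names and defaults vary between infrared versions, so treat everything below as an assumption), a node specification might look like:

    # hypothetical node file, e.g. controller.yml -- keys are assumptions
    name: controller
    cpu: 4                       # vCPUs
    memory: 8192                 # MiB
    disks:
        disk1:
            size: 40G
            path: /var/lib/libvirt/images
    interfaces:
        nic1:
            network: data        # networks match the layout described below
        nic2:
            network: management
        nic3:
            network: external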

Please see Bootstrap guide where usage is demonstrated.

Network layout

The baremetal machine used as the host for this setup is called the hypervisor. The whole deployment is designed to work within the boundaries of this machine and, except for public/NATed traffic, shouldn't reach beyond it. The following layout is part of the default setup defined in the plugin's defaults:

      hypervisor
          |
          +--------+ nic0 - public IP
          |
          +--------+ nic1 - not managed
          |
            ...                                              Libvirt VMs
          |                                                        |
    ------+--------+ data bridge (ctlplane, 192.0.2/24)            +------+ data (nic0)
    |     |                                                        |
libvirt --+--------+ management bridge (nat, dhcp, 172.16.0/24)    +------+ management (nic1)
    |     |                                                        |
    ------+--------+ external bridge (nat, dhcp, 10.0.0/24)        +------+ external (nic2)

On the hypervisor, libvirt creates three new bridges: data, management, and external. The most important is the data network, which has neither DHCP nor NAT enabled. This network can later be used as the ctlplane for OSP director deployments (tripleo-undercloud). The other (usually physical) interfaces (nic0, nic1, ...) are not used, except for public/NATed traffic. The external network is used for SSH forwarding so that a client (or Ansible) can access dynamically created nodes.
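
To inspect what was created, standard virsh commands can be run on the hypervisor. The network name below follows the diagram above; the actual names may differ:

    # List libvirt networks and dump one definition
    virsh net-list --all
    virsh net-dumpxml data    # the data network should contain no DHCP or NAT (forward) configuration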

NAT Forwarding

By default, all networks above are NATed, meaning that they are private networks reachable only via the hypervisor node. infrared configures the nodes' SSH connections to use the hypervisor host as a proxy.
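
Reaching a node manually works the same way. The user and address below are illustrative (the management network in the default layout is 172.16.0/24); infrared sets up the equivalent automatically in its generated inventory:

    # Jump through the hypervisor to reach a NATed node
    ssh -o ProxyCommand="ssh -W %h:%p root@<hypervisor>" root@172.16.0.10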

Bridged Network

Some use cases call for direct access to some of the nodes. This is achieved by adding a network with forward: bridge in its attributes to the network-topology file, and marking this network as an external network in the relevant node files.

The result is a virtual bridge on the hypervisor, connected to the main NIC. VMs attached to this bridge are served by the same LAN as the hypervisor.
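
A sketch of such a network entry in a network-topology file might look like the following. Apart from the forward: bridge attribute itself, every key here is an assumption and may not match the actual file format:

    # hypothetical entry in a network-topology file
    networks:
        net4:
            name: br-link
            forward: bridge
            nic: eth0        # hypervisor NIC the bridge attaches to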

Warning

Be careful when using this feature. For example, an undercloud connected in this manner can disrupt the LAN by serving as an unauthorized DHCP server.

For example, see the tripleo node used in conjunction with the 3_net_1_bridge network file:

infrared virsh [...] --topology-nodes ironic:1,[...] --topology-network 3_net_1_bridge [...]

Workflow

  1. Set up the libvirt and KVM environment
  2. Set up libvirt networks
  3. Download the base image for the undercloud (--image-url)
  4. Create the desired number of images and integrate them into libvirt
  5. Define virtual machines with the requested parameters (--topology-nodes)
  6. Start the virtual machines
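
Putting these steps together, a provisioning run might look like the following. The host address, key path, and image URL are placeholders, and --host-address/--host-key are assumed here as the options that point infrared at the hypervisor:

    infrared virsh --host-address hypervisor.example.com \
        --host-key ~/.ssh/id_rsa \
        --image-url <url-to-guest-qcow2> \
        --topology-nodes undercloud:1,controller:1,compute:1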

Environments prepared in this way are usually used as basic virtual infrastructure for tripleo-undercloud.

Note

The Virsh provisioner has known idempotency issues, so infrared virsh ... --cleanup must be run before every reprovisioning.
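
For example, assuming the same placeholder host details as above:

    infrared virsh --host-address hypervisor.example.com --host-key ~/.ssh/id_rsa --cleanup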